Dec  4 19:18:35 np0005546222 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  4 19:18:35 np0005546222 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  4 19:18:35 np0005546222 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 19:18:35 np0005546222 kernel: BIOS-provided physical RAM map:
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  4 19:18:35 np0005546222 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  4 19:18:35 np0005546222 kernel: NX (Execute Disable) protection: active
Dec  4 19:18:35 np0005546222 kernel: APIC: Static calls initialized
Dec  4 19:18:35 np0005546222 kernel: SMBIOS 2.8 present.
Dec  4 19:18:35 np0005546222 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  4 19:18:35 np0005546222 kernel: Hypervisor detected: KVM
Dec  4 19:18:35 np0005546222 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  4 19:18:35 np0005546222 kernel: kvm-clock: using sched offset of 3975422000 cycles
Dec  4 19:18:35 np0005546222 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  4 19:18:35 np0005546222 kernel: tsc: Detected 2800.000 MHz processor
Dec  4 19:18:35 np0005546222 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  4 19:18:35 np0005546222 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  4 19:18:35 np0005546222 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  4 19:18:35 np0005546222 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  4 19:18:35 np0005546222 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  4 19:18:35 np0005546222 kernel: Using GB pages for direct mapping
Dec  4 19:18:35 np0005546222 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  4 19:18:35 np0005546222 kernel: ACPI: Early table checksum verification disabled
Dec  4 19:18:35 np0005546222 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  4 19:18:35 np0005546222 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 19:18:35 np0005546222 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 19:18:35 np0005546222 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 19:18:35 np0005546222 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  4 19:18:35 np0005546222 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 19:18:35 np0005546222 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  4 19:18:35 np0005546222 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  4 19:18:35 np0005546222 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  4 19:18:35 np0005546222 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  4 19:18:35 np0005546222 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  4 19:18:35 np0005546222 kernel: No NUMA configuration found
Dec  4 19:18:35 np0005546222 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  4 19:18:35 np0005546222 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  4 19:18:35 np0005546222 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  4 19:18:35 np0005546222 kernel: Zone ranges:
Dec  4 19:18:35 np0005546222 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  4 19:18:35 np0005546222 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  4 19:18:35 np0005546222 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  4 19:18:35 np0005546222 kernel:  Device   empty
Dec  4 19:18:35 np0005546222 kernel: Movable zone start for each node
Dec  4 19:18:35 np0005546222 kernel: Early memory node ranges
Dec  4 19:18:35 np0005546222 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  4 19:18:35 np0005546222 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  4 19:18:35 np0005546222 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  4 19:18:35 np0005546222 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  4 19:18:35 np0005546222 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  4 19:18:35 np0005546222 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  4 19:18:35 np0005546222 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  4 19:18:35 np0005546222 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  4 19:18:35 np0005546222 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  4 19:18:35 np0005546222 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  4 19:18:35 np0005546222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  4 19:18:35 np0005546222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  4 19:18:35 np0005546222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  4 19:18:35 np0005546222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  4 19:18:35 np0005546222 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  4 19:18:35 np0005546222 kernel: TSC deadline timer available
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Max. logical packages:   8
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Max. logical dies:       8
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Max. dies per package:   1
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Max. threads per core:   1
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Num. cores per package:     1
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Num. threads per package:   1
Dec  4 19:18:35 np0005546222 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  4 19:18:35 np0005546222 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  4 19:18:35 np0005546222 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  4 19:18:35 np0005546222 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  4 19:18:35 np0005546222 kernel: Booting paravirtualized kernel on KVM
Dec  4 19:18:35 np0005546222 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  4 19:18:35 np0005546222 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  4 19:18:35 np0005546222 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  4 19:18:35 np0005546222 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  4 19:18:35 np0005546222 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 19:18:35 np0005546222 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  4 19:18:35 np0005546222 kernel: random: crng init done
Dec  4 19:18:35 np0005546222 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: Fallback order for Node 0: 0 
Dec  4 19:18:35 np0005546222 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  4 19:18:35 np0005546222 kernel: Policy zone: Normal
Dec  4 19:18:35 np0005546222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  4 19:18:35 np0005546222 kernel: software IO TLB: area num 8.
Dec  4 19:18:35 np0005546222 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  4 19:18:35 np0005546222 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  4 19:18:35 np0005546222 kernel: ftrace: allocated 193 pages with 3 groups
Dec  4 19:18:35 np0005546222 kernel: Dynamic Preempt: voluntary
Dec  4 19:18:35 np0005546222 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  4 19:18:35 np0005546222 kernel: rcu: 	RCU event tracing is enabled.
Dec  4 19:18:35 np0005546222 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  4 19:18:35 np0005546222 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  4 19:18:35 np0005546222 kernel: 	Rude variant of Tasks RCU enabled.
Dec  4 19:18:35 np0005546222 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  4 19:18:35 np0005546222 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  4 19:18:35 np0005546222 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  4 19:18:35 np0005546222 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 19:18:35 np0005546222 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 19:18:35 np0005546222 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  4 19:18:35 np0005546222 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  4 19:18:35 np0005546222 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  4 19:18:35 np0005546222 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  4 19:18:35 np0005546222 kernel: Console: colour VGA+ 80x25
Dec  4 19:18:35 np0005546222 kernel: printk: console [ttyS0] enabled
Dec  4 19:18:35 np0005546222 kernel: ACPI: Core revision 20230331
Dec  4 19:18:35 np0005546222 kernel: APIC: Switch to symmetric I/O mode setup
Dec  4 19:18:35 np0005546222 kernel: x2apic enabled
Dec  4 19:18:35 np0005546222 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  4 19:18:35 np0005546222 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  4 19:18:35 np0005546222 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  4 19:18:35 np0005546222 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  4 19:18:35 np0005546222 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  4 19:18:35 np0005546222 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  4 19:18:35 np0005546222 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  4 19:18:35 np0005546222 kernel: Spectre V2 : Mitigation: Retpolines
Dec  4 19:18:35 np0005546222 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  4 19:18:35 np0005546222 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  4 19:18:35 np0005546222 kernel: RETBleed: Mitigation: untrained return thunk
Dec  4 19:18:35 np0005546222 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  4 19:18:35 np0005546222 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  4 19:18:35 np0005546222 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  4 19:18:35 np0005546222 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  4 19:18:35 np0005546222 kernel: x86/bugs: return thunk changed
Dec  4 19:18:35 np0005546222 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  4 19:18:35 np0005546222 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  4 19:18:35 np0005546222 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  4 19:18:35 np0005546222 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  4 19:18:35 np0005546222 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  4 19:18:35 np0005546222 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  4 19:18:35 np0005546222 kernel: Freeing SMP alternatives memory: 40K
Dec  4 19:18:35 np0005546222 kernel: pid_max: default: 32768 minimum: 301
Dec  4 19:18:35 np0005546222 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  4 19:18:35 np0005546222 kernel: landlock: Up and running.
Dec  4 19:18:35 np0005546222 kernel: Yama: becoming mindful.
Dec  4 19:18:35 np0005546222 kernel: SELinux:  Initializing.
Dec  4 19:18:35 np0005546222 kernel: LSM support for eBPF active
Dec  4 19:18:35 np0005546222 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  4 19:18:35 np0005546222 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  4 19:18:35 np0005546222 kernel: ... version:                0
Dec  4 19:18:35 np0005546222 kernel: ... bit width:              48
Dec  4 19:18:35 np0005546222 kernel: ... generic registers:      6
Dec  4 19:18:35 np0005546222 kernel: ... value mask:             0000ffffffffffff
Dec  4 19:18:35 np0005546222 kernel: ... max period:             00007fffffffffff
Dec  4 19:18:35 np0005546222 kernel: ... fixed-purpose events:   0
Dec  4 19:18:35 np0005546222 kernel: ... event mask:             000000000000003f
Dec  4 19:18:35 np0005546222 kernel: signal: max sigframe size: 1776
Dec  4 19:18:35 np0005546222 kernel: rcu: Hierarchical SRCU implementation.
Dec  4 19:18:35 np0005546222 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  4 19:18:35 np0005546222 kernel: smp: Bringing up secondary CPUs ...
Dec  4 19:18:35 np0005546222 kernel: smpboot: x86: Booting SMP configuration:
Dec  4 19:18:35 np0005546222 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  4 19:18:35 np0005546222 kernel: smp: Brought up 1 node, 8 CPUs
Dec  4 19:18:35 np0005546222 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  4 19:18:35 np0005546222 kernel: node 0 deferred pages initialised in 21ms
Dec  4 19:18:35 np0005546222 kernel: Memory: 7763992K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  4 19:18:35 np0005546222 kernel: devtmpfs: initialized
Dec  4 19:18:35 np0005546222 kernel: x86/mm: Memory block size: 128MB
Dec  4 19:18:35 np0005546222 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  4 19:18:35 np0005546222 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  4 19:18:35 np0005546222 kernel: pinctrl core: initialized pinctrl subsystem
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  4 19:18:35 np0005546222 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  4 19:18:35 np0005546222 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  4 19:18:35 np0005546222 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  4 19:18:35 np0005546222 kernel: audit: initializing netlink subsys (disabled)
Dec  4 19:18:35 np0005546222 kernel: audit: type=2000 audit(1764893913.664:1): state=initialized audit_enabled=0 res=1
Dec  4 19:18:35 np0005546222 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  4 19:18:35 np0005546222 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  4 19:18:35 np0005546222 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  4 19:18:35 np0005546222 kernel: cpuidle: using governor menu
Dec  4 19:18:35 np0005546222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  4 19:18:35 np0005546222 kernel: PCI: Using configuration type 1 for base access
Dec  4 19:18:35 np0005546222 kernel: PCI: Using configuration type 1 for extended access
Dec  4 19:18:35 np0005546222 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  4 19:18:35 np0005546222 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  4 19:18:35 np0005546222 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  4 19:18:35 np0005546222 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  4 19:18:35 np0005546222 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  4 19:18:35 np0005546222 kernel: Demotion targets for Node 0: null
Dec  4 19:18:35 np0005546222 kernel: cryptd: max_cpu_qlen set to 1000
Dec  4 19:18:35 np0005546222 kernel: ACPI: Added _OSI(Module Device)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Added _OSI(Processor Device)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  4 19:18:35 np0005546222 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  4 19:18:35 np0005546222 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  4 19:18:35 np0005546222 kernel: ACPI: Interpreter enabled
Dec  4 19:18:35 np0005546222 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  4 19:18:35 np0005546222 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  4 19:18:35 np0005546222 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  4 19:18:35 np0005546222 kernel: PCI: Using E820 reservations for host bridge windows
Dec  4 19:18:35 np0005546222 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  4 19:18:35 np0005546222 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [3] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [4] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [5] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [6] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [7] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [8] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [9] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [10] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [11] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [12] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [13] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [14] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [15] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [16] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [17] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [18] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [19] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [20] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [21] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [22] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [23] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [24] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [25] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [26] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [27] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [28] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [29] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [30] registered
Dec  4 19:18:35 np0005546222 kernel: acpiphp: Slot [31] registered
Dec  4 19:18:35 np0005546222 kernel: PCI host bridge to bus 0000:00
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  4 19:18:35 np0005546222 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  4 19:18:35 np0005546222 kernel: iommu: Default domain type: Translated
Dec  4 19:18:35 np0005546222 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  4 19:18:35 np0005546222 kernel: SCSI subsystem initialized
Dec  4 19:18:35 np0005546222 kernel: ACPI: bus type USB registered
Dec  4 19:18:35 np0005546222 kernel: usbcore: registered new interface driver usbfs
Dec  4 19:18:35 np0005546222 kernel: usbcore: registered new interface driver hub
Dec  4 19:18:35 np0005546222 kernel: usbcore: registered new device driver usb
Dec  4 19:18:35 np0005546222 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  4 19:18:35 np0005546222 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  4 19:18:35 np0005546222 kernel: PTP clock support registered
Dec  4 19:18:35 np0005546222 kernel: EDAC MC: Ver: 3.0.0
Dec  4 19:18:35 np0005546222 kernel: NetLabel: Initializing
Dec  4 19:18:35 np0005546222 kernel: NetLabel:  domain hash size = 128
Dec  4 19:18:35 np0005546222 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  4 19:18:35 np0005546222 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  4 19:18:35 np0005546222 kernel: PCI: Using ACPI for IRQ routing
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  4 19:18:35 np0005546222 kernel: vgaarb: loaded
Dec  4 19:18:35 np0005546222 kernel: clocksource: Switched to clocksource kvm-clock
Dec  4 19:18:35 np0005546222 kernel: VFS: Disk quotas dquot_6.6.0
Dec  4 19:18:35 np0005546222 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  4 19:18:35 np0005546222 kernel: pnp: PnP ACPI init
Dec  4 19:18:35 np0005546222 kernel: pnp: PnP ACPI: found 5 devices
Dec  4 19:18:35 np0005546222 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_INET protocol family
Dec  4 19:18:35 np0005546222 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  4 19:18:35 np0005546222 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_XDP protocol family
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  4 19:18:35 np0005546222 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  4 19:18:35 np0005546222 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  4 19:18:35 np0005546222 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80517 usecs
Dec  4 19:18:35 np0005546222 kernel: PCI: CLS 0 bytes, default 64
Dec  4 19:18:35 np0005546222 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  4 19:18:35 np0005546222 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  4 19:18:35 np0005546222 kernel: ACPI: bus type thunderbolt registered
Dec  4 19:18:35 np0005546222 kernel: Trying to unpack rootfs image as initramfs...
Dec  4 19:18:35 np0005546222 kernel: Initialise system trusted keyrings
Dec  4 19:18:35 np0005546222 kernel: Key type blacklist registered
Dec  4 19:18:35 np0005546222 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  4 19:18:35 np0005546222 kernel: zbud: loaded
Dec  4 19:18:35 np0005546222 kernel: integrity: Platform Keyring initialized
Dec  4 19:18:35 np0005546222 kernel: integrity: Machine keyring initialized
Dec  4 19:18:35 np0005546222 kernel: Freeing initrd memory: 87804K
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_ALG protocol family
Dec  4 19:18:35 np0005546222 kernel: xor: automatically using best checksumming function   avx       
Dec  4 19:18:35 np0005546222 kernel: Key type asymmetric registered
Dec  4 19:18:35 np0005546222 kernel: Asymmetric key parser 'x509' registered
Dec  4 19:18:35 np0005546222 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  4 19:18:35 np0005546222 kernel: io scheduler mq-deadline registered
Dec  4 19:18:35 np0005546222 kernel: io scheduler kyber registered
Dec  4 19:18:35 np0005546222 kernel: io scheduler bfq registered
Dec  4 19:18:35 np0005546222 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  4 19:18:35 np0005546222 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  4 19:18:35 np0005546222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  4 19:18:35 np0005546222 kernel: ACPI: button: Power Button [PWRF]
Dec  4 19:18:35 np0005546222 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  4 19:18:35 np0005546222 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  4 19:18:35 np0005546222 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  4 19:18:35 np0005546222 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  4 19:18:35 np0005546222 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  4 19:18:35 np0005546222 kernel: Non-volatile memory driver v1.3
Dec  4 19:18:35 np0005546222 kernel: rdac: device handler registered
Dec  4 19:18:35 np0005546222 kernel: hp_sw: device handler registered
Dec  4 19:18:35 np0005546222 kernel: emc: device handler registered
Dec  4 19:18:35 np0005546222 kernel: alua: device handler registered
Dec  4 19:18:35 np0005546222 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  4 19:18:35 np0005546222 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  4 19:18:35 np0005546222 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  4 19:18:35 np0005546222 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  4 19:18:35 np0005546222 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  4 19:18:35 np0005546222 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  4 19:18:35 np0005546222 kernel: usb usb1: Product: UHCI Host Controller
Dec  4 19:18:35 np0005546222 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  4 19:18:35 np0005546222 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  4 19:18:35 np0005546222 kernel: hub 1-0:1.0: USB hub found
Dec  4 19:18:35 np0005546222 kernel: hub 1-0:1.0: 2 ports detected
Dec  4 19:18:35 np0005546222 kernel: usbcore: registered new interface driver usbserial_generic
Dec  4 19:18:35 np0005546222 kernel: usbserial: USB Serial support registered for generic
Dec  4 19:18:35 np0005546222 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  4 19:18:35 np0005546222 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  4 19:18:35 np0005546222 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  4 19:18:35 np0005546222 kernel: mousedev: PS/2 mouse device common for all mice
Dec  4 19:18:35 np0005546222 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  4 19:18:35 np0005546222 kernel: rtc_cmos 00:04: registered as rtc0
Dec  4 19:18:35 np0005546222 kernel: rtc_cmos 00:04: setting system clock to 2025-12-05T00:18:34 UTC (1764893914)
Dec  4 19:18:35 np0005546222 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  4 19:18:35 np0005546222 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  4 19:18:35 np0005546222 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  4 19:18:35 np0005546222 kernel: usbcore: registered new interface driver usbhid
Dec  4 19:18:35 np0005546222 kernel: usbhid: USB HID core driver
Dec  4 19:18:35 np0005546222 kernel: drop_monitor: Initializing network drop monitor service
Dec  4 19:18:35 np0005546222 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  4 19:18:35 np0005546222 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  4 19:18:35 np0005546222 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  4 19:18:35 np0005546222 kernel: Initializing XFRM netlink socket
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_INET6 protocol family
Dec  4 19:18:35 np0005546222 kernel: Segment Routing with IPv6
Dec  4 19:18:35 np0005546222 kernel: NET: Registered PF_PACKET protocol family
Dec  4 19:18:35 np0005546222 kernel: mpls_gso: MPLS GSO support
Dec  4 19:18:35 np0005546222 kernel: IPI shorthand broadcast: enabled
Dec  4 19:18:35 np0005546222 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  4 19:18:35 np0005546222 kernel: AES CTR mode by8 optimization enabled
Dec  4 19:18:35 np0005546222 kernel: sched_clock: Marking stable (1705002320, 158712810)->(2041589540, -177874410)
Dec  4 19:18:35 np0005546222 kernel: registered taskstats version 1
Dec  4 19:18:35 np0005546222 kernel: Loading compiled-in X.509 certificates
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  4 19:18:35 np0005546222 kernel: Demotion targets for Node 0: null
Dec  4 19:18:35 np0005546222 kernel: page_owner is disabled
Dec  4 19:18:35 np0005546222 kernel: Key type .fscrypt registered
Dec  4 19:18:35 np0005546222 kernel: Key type fscrypt-provisioning registered
Dec  4 19:18:35 np0005546222 kernel: Key type big_key registered
Dec  4 19:18:35 np0005546222 kernel: Key type encrypted registered
Dec  4 19:18:35 np0005546222 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  4 19:18:35 np0005546222 kernel: Loading compiled-in module X.509 certificates
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  4 19:18:35 np0005546222 kernel: ima: Allocated hash algorithm: sha256
Dec  4 19:18:35 np0005546222 kernel: ima: No architecture policies found
Dec  4 19:18:35 np0005546222 kernel: evm: Initialising EVM extended attributes:
Dec  4 19:18:35 np0005546222 kernel: evm: security.selinux
Dec  4 19:18:35 np0005546222 kernel: evm: security.SMACK64 (disabled)
Dec  4 19:18:35 np0005546222 kernel: evm: security.SMACK64EXEC (disabled)
Dec  4 19:18:35 np0005546222 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  4 19:18:35 np0005546222 kernel: evm: security.SMACK64MMAP (disabled)
Dec  4 19:18:35 np0005546222 kernel: evm: security.apparmor (disabled)
Dec  4 19:18:35 np0005546222 kernel: evm: security.ima
Dec  4 19:18:35 np0005546222 kernel: evm: security.capability
Dec  4 19:18:35 np0005546222 kernel: evm: HMAC attrs: 0x1
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  4 19:18:35 np0005546222 kernel: Running certificate verification RSA selftest
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  4 19:18:35 np0005546222 kernel: Running certificate verification ECDSA selftest
Dec  4 19:18:35 np0005546222 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  4 19:18:35 np0005546222 kernel: clk: Disabling unused clocks
Dec  4 19:18:35 np0005546222 kernel: Freeing unused decrypted memory: 2028K
Dec  4 19:18:35 np0005546222 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  4 19:18:35 np0005546222 kernel: Write protecting the kernel read-only data: 30720k
Dec  4 19:18:35 np0005546222 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: Manufacturer: QEMU
Dec  4 19:18:35 np0005546222 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  4 19:18:35 np0005546222 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  4 19:18:35 np0005546222 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  4 19:18:35 np0005546222 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  4 19:18:35 np0005546222 kernel: Run /init as init process
Dec  4 19:18:35 np0005546222 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  4 19:18:35 np0005546222 systemd: Detected virtualization kvm.
Dec  4 19:18:35 np0005546222 systemd: Detected architecture x86-64.
Dec  4 19:18:35 np0005546222 systemd: Running in initrd.
Dec  4 19:18:35 np0005546222 systemd: No hostname configured, using default hostname.
Dec  4 19:18:35 np0005546222 systemd: Hostname set to <localhost>.
Dec  4 19:18:35 np0005546222 systemd: Initializing machine ID from VM UUID.
Dec  4 19:18:35 np0005546222 systemd: Queued start job for default target Initrd Default Target.
Dec  4 19:18:35 np0005546222 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  4 19:18:35 np0005546222 systemd: Reached target Local Encrypted Volumes.
Dec  4 19:18:35 np0005546222 systemd: Reached target Initrd /usr File System.
Dec  4 19:18:35 np0005546222 systemd: Reached target Local File Systems.
Dec  4 19:18:35 np0005546222 systemd: Reached target Path Units.
Dec  4 19:18:35 np0005546222 systemd: Reached target Slice Units.
Dec  4 19:18:35 np0005546222 systemd: Reached target Swaps.
Dec  4 19:18:35 np0005546222 systemd: Reached target Timer Units.
Dec  4 19:18:35 np0005546222 systemd: Listening on D-Bus System Message Bus Socket.
Dec  4 19:18:35 np0005546222 systemd: Listening on Journal Socket (/dev/log).
Dec  4 19:18:35 np0005546222 systemd: Listening on Journal Socket.
Dec  4 19:18:35 np0005546222 systemd: Listening on udev Control Socket.
Dec  4 19:18:35 np0005546222 systemd: Listening on udev Kernel Socket.
Dec  4 19:18:35 np0005546222 systemd: Reached target Socket Units.
Dec  4 19:18:35 np0005546222 systemd: Starting Create List of Static Device Nodes...
Dec  4 19:18:35 np0005546222 systemd: Starting Journal Service...
Dec  4 19:18:35 np0005546222 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  4 19:18:35 np0005546222 systemd: Starting Apply Kernel Variables...
Dec  4 19:18:35 np0005546222 systemd: Starting Create System Users...
Dec  4 19:18:35 np0005546222 systemd: Starting Setup Virtual Console...
Dec  4 19:18:35 np0005546222 systemd: Finished Create List of Static Device Nodes.
Dec  4 19:18:35 np0005546222 systemd: Finished Apply Kernel Variables.
Dec  4 19:18:35 np0005546222 systemd-journald[307]: Journal started
Dec  4 19:18:35 np0005546222 systemd-journald[307]: Runtime Journal (/run/log/journal/6c9ead2d84954e2b9845f862956e441e) is 8.0M, max 153.6M, 145.6M free.
Dec  4 19:18:35 np0005546222 systemd: Started Journal Service.
Dec  4 19:18:35 np0005546222 systemd-sysusers[311]: Creating group 'users' with GID 100.
Dec  4 19:18:35 np0005546222 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Dec  4 19:18:35 np0005546222 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  4 19:18:35 np0005546222 systemd[1]: Finished Create System Users.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  4 19:18:35 np0005546222 systemd[1]: Starting Create Volatile Files and Directories...
Dec  4 19:18:35 np0005546222 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  4 19:18:35 np0005546222 systemd[1]: Finished Create Volatile Files and Directories.
Dec  4 19:18:35 np0005546222 systemd[1]: Finished Setup Virtual Console.
Dec  4 19:18:35 np0005546222 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting dracut cmdline hook...
Dec  4 19:18:35 np0005546222 dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Dec  4 19:18:35 np0005546222 dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  4 19:18:35 np0005546222 systemd[1]: Finished dracut cmdline hook.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting dracut pre-udev hook...
Dec  4 19:18:35 np0005546222 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  4 19:18:35 np0005546222 kernel: device-mapper: uevent: version 1.0.3
Dec  4 19:18:35 np0005546222 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  4 19:18:35 np0005546222 kernel: RPC: Registered named UNIX socket transport module.
Dec  4 19:18:35 np0005546222 kernel: RPC: Registered udp transport module.
Dec  4 19:18:35 np0005546222 kernel: RPC: Registered tcp transport module.
Dec  4 19:18:35 np0005546222 kernel: RPC: Registered tcp-with-tls transport module.
Dec  4 19:18:35 np0005546222 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  4 19:18:35 np0005546222 rpc.statd[442]: Version 2.5.4 starting
Dec  4 19:18:35 np0005546222 rpc.statd[442]: Initializing NSM state
Dec  4 19:18:35 np0005546222 rpc.idmapd[447]: Setting log level to 0
Dec  4 19:18:35 np0005546222 systemd[1]: Finished dracut pre-udev hook.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  4 19:18:35 np0005546222 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec  4 19:18:35 np0005546222 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting dracut pre-trigger hook...
Dec  4 19:18:35 np0005546222 systemd[1]: Finished dracut pre-trigger hook.
Dec  4 19:18:35 np0005546222 systemd[1]: Starting Coldplug All udev Devices...
Dec  4 19:18:36 np0005546222 systemd[1]: Created slice Slice /system/modprobe.
Dec  4 19:18:36 np0005546222 systemd[1]: Starting Load Kernel Module configfs...
Dec  4 19:18:36 np0005546222 systemd[1]: Finished Coldplug All udev Devices.
Dec  4 19:18:36 np0005546222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 19:18:36 np0005546222 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 19:18:36 np0005546222 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Network.
Dec  4 19:18:36 np0005546222 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  4 19:18:36 np0005546222 systemd[1]: Starting dracut initqueue hook...
Dec  4 19:18:36 np0005546222 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  4 19:18:36 np0005546222 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  4 19:18:36 np0005546222 kernel: vda: vda1
Dec  4 19:18:36 np0005546222 kernel: scsi host0: ata_piix
Dec  4 19:18:36 np0005546222 kernel: scsi host1: ata_piix
Dec  4 19:18:36 np0005546222 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  4 19:18:36 np0005546222 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  4 19:18:36 np0005546222 systemd[1]: Mounting Kernel Configuration File System...
Dec  4 19:18:36 np0005546222 systemd[1]: Mounted Kernel Configuration File System.
Dec  4 19:18:36 np0005546222 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Initrd Root Device.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target System Initialization.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Basic System.
Dec  4 19:18:36 np0005546222 kernel: ata1: found unknown device (class 0)
Dec  4 19:18:36 np0005546222 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  4 19:18:36 np0005546222 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  4 19:18:36 np0005546222 systemd-udevd[472]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:18:36 np0005546222 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  4 19:18:36 np0005546222 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  4 19:18:36 np0005546222 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  4 19:18:36 np0005546222 systemd[1]: Finished dracut initqueue hook.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  4 19:18:36 np0005546222 systemd[1]: Reached target Remote File Systems.
Dec  4 19:18:36 np0005546222 systemd[1]: Starting dracut pre-mount hook...
Dec  4 19:18:36 np0005546222 systemd[1]: Finished dracut pre-mount hook.
Dec  4 19:18:36 np0005546222 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  4 19:18:36 np0005546222 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec  4 19:18:36 np0005546222 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  4 19:18:36 np0005546222 systemd[1]: Mounting /sysroot...
Dec  4 19:18:37 np0005546222 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  4 19:18:37 np0005546222 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  4 19:18:37 np0005546222 kernel: XFS (vda1): Ending clean mount
Dec  4 19:18:37 np0005546222 systemd[1]: Mounted /sysroot.
Dec  4 19:18:37 np0005546222 systemd[1]: Reached target Initrd Root File System.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  4 19:18:37 np0005546222 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  4 19:18:37 np0005546222 systemd[1]: Reached target Initrd File Systems.
Dec  4 19:18:37 np0005546222 systemd[1]: Reached target Initrd Default Target.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting dracut mount hook...
Dec  4 19:18:37 np0005546222 systemd[1]: Finished dracut mount hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  4 19:18:37 np0005546222 rpc.idmapd[447]: exiting on signal 15
Dec  4 19:18:37 np0005546222 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Network.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Timer Units.
Dec  4 19:18:37 np0005546222 systemd[1]: dbus.socket: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Initrd Default Target.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Basic System.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Initrd Root Device.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Initrd /usr File System.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Path Units.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Remote File Systems.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Slice Units.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Socket Units.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target System Initialization.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Local File Systems.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Swaps.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut mount hook.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut pre-mount hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut initqueue hook.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Apply Kernel Variables.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Coldplug All udev Devices.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut pre-trigger hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Setup Virtual Console.
Dec  4 19:18:37 np0005546222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  4 19:18:37 np0005546222 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Closed udev Control Socket.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Closed udev Kernel Socket.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut pre-udev hook.
Dec  4 19:18:37 np0005546222 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped dracut cmdline hook.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting Cleanup udev Database...
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  4 19:18:37 np0005546222 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  4 19:18:37 np0005546222 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Stopped Create System Users.
Dec  4 19:18:37 np0005546222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  4 19:18:37 np0005546222 systemd[1]: Finished Cleanup udev Database.
Dec  4 19:18:37 np0005546222 systemd[1]: Reached target Switch Root.
Dec  4 19:18:37 np0005546222 systemd[1]: Starting Switch Root...
Dec  4 19:18:37 np0005546222 systemd[1]: Switching root.
Dec  4 19:18:37 np0005546222 systemd-journald[307]: Journal stopped
Dec  4 19:18:38 np0005546222 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  4 19:18:38 np0005546222 kernel: audit: type=1404 audit(1764893917.483:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:18:38 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:18:38 np0005546222 kernel: audit: type=1403 audit(1764893917.617:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  4 19:18:38 np0005546222 systemd: Successfully loaded SELinux policy in 137.469ms.
Dec  4 19:18:38 np0005546222 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.493ms.
Dec  4 19:18:38 np0005546222 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  4 19:18:38 np0005546222 systemd: Detected virtualization kvm.
Dec  4 19:18:38 np0005546222 systemd: Detected architecture x86-64.
Dec  4 19:18:38 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:18:38 np0005546222 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd: Stopped Switch Root.
Dec  4 19:18:38 np0005546222 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  4 19:18:38 np0005546222 systemd: Created slice Slice /system/getty.
Dec  4 19:18:38 np0005546222 systemd: Created slice Slice /system/serial-getty.
Dec  4 19:18:38 np0005546222 systemd: Created slice Slice /system/sshd-keygen.
Dec  4 19:18:38 np0005546222 systemd: Created slice User and Session Slice.
Dec  4 19:18:38 np0005546222 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  4 19:18:38 np0005546222 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  4 19:18:38 np0005546222 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  4 19:18:38 np0005546222 systemd: Reached target Local Encrypted Volumes.
Dec  4 19:18:38 np0005546222 systemd: Stopped target Switch Root.
Dec  4 19:18:38 np0005546222 systemd: Stopped target Initrd File Systems.
Dec  4 19:18:38 np0005546222 systemd: Stopped target Initrd Root File System.
Dec  4 19:18:38 np0005546222 systemd: Reached target Local Integrity Protected Volumes.
Dec  4 19:18:38 np0005546222 systemd: Reached target Path Units.
Dec  4 19:18:38 np0005546222 systemd: Reached target rpc_pipefs.target.
Dec  4 19:18:38 np0005546222 systemd: Reached target Slice Units.
Dec  4 19:18:38 np0005546222 systemd: Reached target Swaps.
Dec  4 19:18:38 np0005546222 systemd: Reached target Local Verity Protected Volumes.
Dec  4 19:18:38 np0005546222 systemd: Listening on RPCbind Server Activation Socket.
Dec  4 19:18:38 np0005546222 systemd: Reached target RPC Port Mapper.
Dec  4 19:18:38 np0005546222 systemd: Listening on Process Core Dump Socket.
Dec  4 19:18:38 np0005546222 systemd: Listening on initctl Compatibility Named Pipe.
Dec  4 19:18:38 np0005546222 systemd: Listening on udev Control Socket.
Dec  4 19:18:38 np0005546222 systemd: Listening on udev Kernel Socket.
Dec  4 19:18:38 np0005546222 systemd: Mounting Huge Pages File System...
Dec  4 19:18:38 np0005546222 systemd: Mounting POSIX Message Queue File System...
Dec  4 19:18:38 np0005546222 systemd: Mounting Kernel Debug File System...
Dec  4 19:18:38 np0005546222 systemd: Mounting Kernel Trace File System...
Dec  4 19:18:38 np0005546222 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  4 19:18:38 np0005546222 systemd: Starting Create List of Static Device Nodes...
Dec  4 19:18:38 np0005546222 systemd: Starting Load Kernel Module configfs...
Dec  4 19:18:38 np0005546222 systemd: Starting Load Kernel Module drm...
Dec  4 19:18:38 np0005546222 systemd: Starting Load Kernel Module efi_pstore...
Dec  4 19:18:38 np0005546222 systemd: Starting Load Kernel Module fuse...
Dec  4 19:18:38 np0005546222 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  4 19:18:38 np0005546222 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd: Stopped File System Check on Root Device.
Dec  4 19:18:38 np0005546222 systemd: Stopped Journal Service.
Dec  4 19:18:38 np0005546222 systemd: Starting Journal Service...
Dec  4 19:18:38 np0005546222 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  4 19:18:38 np0005546222 systemd: Starting Generate network units from Kernel command line...
Dec  4 19:18:38 np0005546222 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 19:18:38 np0005546222 systemd: Starting Remount Root and Kernel File Systems...
Dec  4 19:18:38 np0005546222 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  4 19:18:38 np0005546222 systemd: Starting Apply Kernel Variables...
Dec  4 19:18:38 np0005546222 systemd-journald[679]: Journal started
Dec  4 19:18:38 np0005546222 systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  4 19:18:38 np0005546222 systemd[1]: Queued start job for default target Multi-User System.
Dec  4 19:18:38 np0005546222 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 kernel: fuse: init (API version 7.37)
Dec  4 19:18:38 np0005546222 systemd: Starting Coldplug All udev Devices...
Dec  4 19:18:38 np0005546222 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  4 19:18:38 np0005546222 systemd: Started Journal Service.
Dec  4 19:18:38 np0005546222 systemd[1]: Mounted Huge Pages File System.
Dec  4 19:18:38 np0005546222 systemd[1]: Mounted POSIX Message Queue File System.
Dec  4 19:18:38 np0005546222 kernel: ACPI: bus type drm_connector registered
Dec  4 19:18:38 np0005546222 systemd[1]: Mounted Kernel Debug File System.
Dec  4 19:18:38 np0005546222 systemd[1]: Mounted Kernel Trace File System.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Create List of Static Device Nodes.
Dec  4 19:18:38 np0005546222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 19:18:38 np0005546222 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load Kernel Module drm.
Dec  4 19:18:38 np0005546222 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  4 19:18:38 np0005546222 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load Kernel Module fuse.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Generate network units from Kernel command line.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Apply Kernel Variables.
Dec  4 19:18:38 np0005546222 systemd[1]: Mounting FUSE Control File System...
Dec  4 19:18:38 np0005546222 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Rebuild Hardware Database...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  4 19:18:38 np0005546222 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Load/Save OS Random Seed...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Create System Users...
Dec  4 19:18:38 np0005546222 systemd[1]: Mounted FUSE Control File System.
Dec  4 19:18:38 np0005546222 systemd-journald[679]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  4 19:18:38 np0005546222 systemd-journald[679]: Received client request to flush runtime journal.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load/Save OS Random Seed.
Dec  4 19:18:38 np0005546222 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Create System Users.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Coldplug All udev Devices.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target Preparation for Local File Systems.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target Local File Systems.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  4 19:18:38 np0005546222 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  4 19:18:38 np0005546222 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  4 19:18:38 np0005546222 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Automatic Boot Loader Update...
Dec  4 19:18:38 np0005546222 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Create Volatile Files and Directories...
Dec  4 19:18:38 np0005546222 bootctl[698]: Couldn't find EFI system partition, skipping.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Automatic Boot Loader Update.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Create Volatile Files and Directories.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Security Auditing Service...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting RPC Bind...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Rebuild Journal Catalog...
Dec  4 19:18:38 np0005546222 auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  4 19:18:38 np0005546222 auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Rebuild Journal Catalog.
Dec  4 19:18:38 np0005546222 augenrules[709]: /sbin/augenrules: No change
Dec  4 19:18:38 np0005546222 systemd[1]: Started RPC Bind.
Dec  4 19:18:38 np0005546222 augenrules[724]: No rules
Dec  4 19:18:38 np0005546222 augenrules[724]: enabled 1
Dec  4 19:18:38 np0005546222 augenrules[724]: failure 1
Dec  4 19:18:38 np0005546222 augenrules[724]: pid 704
Dec  4 19:18:38 np0005546222 augenrules[724]: rate_limit 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_limit 8192
Dec  4 19:18:38 np0005546222 augenrules[724]: lost 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog 2
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time 60000
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time_actual 0
Dec  4 19:18:38 np0005546222 augenrules[724]: enabled 1
Dec  4 19:18:38 np0005546222 augenrules[724]: failure 1
Dec  4 19:18:38 np0005546222 augenrules[724]: pid 704
Dec  4 19:18:38 np0005546222 augenrules[724]: rate_limit 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_limit 8192
Dec  4 19:18:38 np0005546222 augenrules[724]: lost 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog 2
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time 60000
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time_actual 0
Dec  4 19:18:38 np0005546222 augenrules[724]: enabled 1
Dec  4 19:18:38 np0005546222 augenrules[724]: failure 1
Dec  4 19:18:38 np0005546222 augenrules[724]: pid 704
Dec  4 19:18:38 np0005546222 augenrules[724]: rate_limit 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_limit 8192
Dec  4 19:18:38 np0005546222 augenrules[724]: lost 0
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog 1
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time 60000
Dec  4 19:18:38 np0005546222 augenrules[724]: backlog_wait_time_actual 0
Dec  4 19:18:38 np0005546222 systemd[1]: Started Security Auditing Service.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Rebuild Hardware Database.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Update is Completed...
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Update is Completed.
Dec  4 19:18:38 np0005546222 systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Dec  4 19:18:38 np0005546222 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target System Initialization.
Dec  4 19:18:38 np0005546222 systemd[1]: Started dnf makecache --timer.
Dec  4 19:18:38 np0005546222 systemd[1]: Started Daily rotation of log files.
Dec  4 19:18:38 np0005546222 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target Timer Units.
Dec  4 19:18:38 np0005546222 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  4 19:18:38 np0005546222 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target Socket Units.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting D-Bus System Message Bus...
Dec  4 19:18:38 np0005546222 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 19:18:38 np0005546222 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  4 19:18:38 np0005546222 systemd-udevd[735]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Load Kernel Module configfs...
Dec  4 19:18:38 np0005546222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  4 19:18:38 np0005546222 systemd[1]: Finished Load Kernel Module configfs.
Dec  4 19:18:38 np0005546222 systemd[1]: Started D-Bus System Message Bus.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target Basic System.
Dec  4 19:18:38 np0005546222 dbus-broker-lau[765]: Ready
Dec  4 19:18:38 np0005546222 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  4 19:18:38 np0005546222 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  4 19:18:38 np0005546222 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  4 19:18:38 np0005546222 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  4 19:18:38 np0005546222 systemd[1]: Starting NTP client/server...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  4 19:18:38 np0005546222 systemd[1]: Starting IPv4 firewall with iptables...
Dec  4 19:18:38 np0005546222 systemd[1]: Started irqbalance daemon.
Dec  4 19:18:38 np0005546222 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  4 19:18:38 np0005546222 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 19:18:38 np0005546222 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 19:18:38 np0005546222 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target sshd-keygen.target.
Dec  4 19:18:38 np0005546222 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  4 19:18:38 np0005546222 systemd[1]: Reached target User and Group Name Lookups.
Dec  4 19:18:39 np0005546222 systemd[1]: Starting User Login Management...
Dec  4 19:18:39 np0005546222 chronyd[795]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  4 19:18:39 np0005546222 chronyd[795]: Loaded 0 symmetric keys
Dec  4 19:18:39 np0005546222 chronyd[795]: Using right/UTC timezone to obtain leap second data
Dec  4 19:18:39 np0005546222 chronyd[795]: Loaded seccomp filter (level 2)
Dec  4 19:18:39 np0005546222 systemd[1]: Started NTP client/server.
Dec  4 19:18:39 np0005546222 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  4 19:18:39 np0005546222 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  4 19:18:39 np0005546222 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  4 19:18:39 np0005546222 systemd-logind[792]: New seat seat0.
Dec  4 19:18:39 np0005546222 systemd[1]: Started User Login Management.
Dec  4 19:18:39 np0005546222 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  4 19:18:39 np0005546222 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  4 19:18:39 np0005546222 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  4 19:18:39 np0005546222 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  4 19:18:39 np0005546222 kernel: Console: switching to colour dummy device 80x25
Dec  4 19:18:39 np0005546222 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  4 19:18:39 np0005546222 kernel: [drm] features: -context_init
Dec  4 19:18:39 np0005546222 kernel: [drm] number of scanouts: 1
Dec  4 19:18:39 np0005546222 kernel: [drm] number of cap sets: 0
Dec  4 19:18:39 np0005546222 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  4 19:18:39 np0005546222 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  4 19:18:39 np0005546222 kernel: Console: switching to colour frame buffer device 128x48
Dec  4 19:18:39 np0005546222 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  4 19:18:39 np0005546222 kernel: kvm_amd: TSC scaling supported
Dec  4 19:18:39 np0005546222 kernel: kvm_amd: Nested Virtualization enabled
Dec  4 19:18:39 np0005546222 kernel: kvm_amd: Nested Paging enabled
Dec  4 19:18:39 np0005546222 kernel: kvm_amd: LBR virtualization supported
Dec  4 19:18:39 np0005546222 iptables.init[784]: iptables: Applying firewall rules: [  OK  ]
Dec  4 19:18:39 np0005546222 systemd[1]: Finished IPv4 firewall with iptables.
Dec  4 19:18:39 np0005546222 cloud-init[843]: Cloud-init v. 24.4-7.el9 running 'init-local' at Fri, 05 Dec 2025 00:18:39 +0000. Up 6.57 seconds.
Dec  4 19:18:39 np0005546222 systemd[1]: run-cloud\x2dinit-tmp-tmp_v0ff7zi.mount: Deactivated successfully.
Dec  4 19:18:39 np0005546222 systemd[1]: Starting Hostname Service...
Dec  4 19:18:39 np0005546222 systemd[1]: Started Hostname Service.
Dec  4 19:18:39 np0005546222 systemd-hostnamed[857]: Hostname set to <np0005546222.novalocal> (static)
Dec  4 19:18:39 np0005546222 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  4 19:18:39 np0005546222 systemd[1]: Reached target Preparation for Network.
Dec  4 19:18:39 np0005546222 systemd[1]: Starting Network Manager...
Dec  4 19:18:39 np0005546222 NetworkManager[861]: <info>  [1764893919.9897] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec  4 19:18:39 np0005546222 NetworkManager[861]: <info>  [1764893919.9903] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0021] manager[0x55dab0b5b080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0071] hostname: hostname: using hostnamed
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0071] hostname: static hostname changed from (none) to "np0005546222.novalocal"
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0080] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0180] manager[0x55dab0b5b080]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0181] manager[0x55dab0b5b080]: rfkill: WWAN hardware radio set enabled
Dec  4 19:18:40 np0005546222 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0247] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0247] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0248] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0250] manager: Networking is enabled by state file
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0253] settings: Loaded settings plugin: keyfile (internal)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0270] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0311] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0330] dhcp: init: Using DHCP client 'internal'
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0335] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0361] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0376] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0393] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0410] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0417] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:18:40 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:18:40 np0005546222 systemd[1]: Started Network Manager.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0474] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 19:18:40 np0005546222 systemd[1]: Reached target Network.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0487] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0490] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0493] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0496] device (eth0): carrier: link connected
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0501] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0512] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  4 19:18:40 np0005546222 systemd[1]: Starting Network Manager Wait Online...
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0522] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0530] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0531] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0536] manager: NetworkManager state is now CONNECTING
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0538] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0551] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0557] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0611] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0625] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 19:18:40 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0661] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0692] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0696] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 19:18:40 np0005546222 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  4 19:18:40 np0005546222 systemd[1]: Reached target NFS client services.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0706] device (lo): Activation: successful, device activated.
Dec  4 19:18:40 np0005546222 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  4 19:18:40 np0005546222 systemd[1]: Reached target Remote File Systems.
Dec  4 19:18:40 np0005546222 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0760] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0762] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0765] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0767] device (eth0): Activation: successful, device activated.
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0772] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 19:18:40 np0005546222 NetworkManager[861]: <info>  [1764893920.0774] manager: startup complete
Dec  4 19:18:40 np0005546222 systemd[1]: Finished Network Manager Wait Online.
Dec  4 19:18:40 np0005546222 systemd[1]: Starting Cloud-init: Network Stage...
Dec  4 19:18:40 np0005546222 cloud-init[925]: Cloud-init v. 24.4-7.el9 running 'init' at Fri, 05 Dec 2025 00:18:40 +0000. Up 7.51 seconds.
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |  eth0  | True |        38.102.83.176         | 255.255.255.0 | global | fa:16:3e:86:26:59 |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |  eth0  | True | fe80::f816:3eff:fe86:2659/64 |       .       |  link  | fa:16:3e:86:26:59 |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  4 19:18:40 np0005546222 cloud-init[925]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  4 19:18:41 np0005546222 cloud-init[925]: Generating public/private rsa key pair.
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key fingerprint is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: SHA256:tUePpzzRHZgyI++cXeT8lDATfBnObToXhC99+x2g1LY root@np0005546222.novalocal
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key's randomart image is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: +---[RSA 3072]----+
Dec  4 19:18:41 np0005546222 cloud-init[925]: |            ...oo|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |             oBo.|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |         ..+o*+*o|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |         .o++OB*=|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |        S o.* X=*|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |          o+oE.*.|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |           ++.  =|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |             .  o|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |                 |
Dec  4 19:18:41 np0005546222 cloud-init[925]: +----[SHA256]-----+
Dec  4 19:18:41 np0005546222 cloud-init[925]: Generating public/private ecdsa key pair.
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key fingerprint is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: SHA256:zjEmhd+SR45MSS774Hti793KOBQQmEvPoyBkW8m9HI8 root@np0005546222.novalocal
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key's randomart image is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: +---[ECDSA 256]---+
Dec  4 19:18:41 np0005546222 cloud-init[925]: |  . oo...        |
Dec  4 19:18:41 np0005546222 cloud-init[925]: | o ++o.+ .       |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |o o..+B.= .      |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |... .E+O.*       |
Dec  4 19:18:41 np0005546222 cloud-init[925]: | . . .+.S.+      |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |    .. B.=       |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |      ..+        |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |      o.o+ .     |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |     ..=+.+..    |
Dec  4 19:18:41 np0005546222 cloud-init[925]: +----[SHA256]-----+
Dec  4 19:18:41 np0005546222 cloud-init[925]: Generating public/private ed25519 key pair.
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  4 19:18:41 np0005546222 cloud-init[925]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key fingerprint is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: SHA256:McTQA/aXxjNH1RPfUIGoMMKutUrrMSACe0Q1FA8WXqk root@np0005546222.novalocal
Dec  4 19:18:41 np0005546222 cloud-init[925]: The key's randomart image is:
Dec  4 19:18:41 np0005546222 cloud-init[925]: +--[ED25519 256]--+
Dec  4 19:18:41 np0005546222 cloud-init[925]: |  .oXo=*.   o.+=+|
Dec  4 19:18:41 np0005546222 cloud-init[925]: | . o O.=+. + . o+|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |. . o.o *.O .   +|
Dec  4 19:18:41 np0005546222 cloud-init[925]: |.o  Eo   * +     |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |+.. o . S        |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |o..o .           |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |  .oo            |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |   oo            |
Dec  4 19:18:41 np0005546222 cloud-init[925]: |  ..             |
Dec  4 19:18:41 np0005546222 cloud-init[925]: +----[SHA256]-----+
Dec  4 19:18:41 np0005546222 systemd[1]: Finished Cloud-init: Network Stage.
Dec  4 19:18:41 np0005546222 systemd[1]: Reached target Cloud-config availability.
Dec  4 19:18:41 np0005546222 systemd[1]: Reached target Network is Online.
Dec  4 19:18:41 np0005546222 systemd[1]: Starting Cloud-init: Config Stage...
Dec  4 19:18:41 np0005546222 systemd[1]: Starting Crash recovery kernel arming...
Dec  4 19:18:41 np0005546222 systemd[1]: Starting Notify NFS peers of a restart...
Dec  4 19:18:41 np0005546222 systemd[1]: Starting System Logging Service...
Dec  4 19:18:41 np0005546222 sm-notify[1007]: Version 2.5.4 starting
Dec  4 19:18:41 np0005546222 systemd[1]: Starting OpenSSH server daemon...
Dec  4 19:18:41 np0005546222 systemd[1]: Starting Permit User Sessions...
Dec  4 19:18:41 np0005546222 systemd[1]: Started Notify NFS peers of a restart.
Dec  4 19:18:41 np0005546222 systemd[1]: Finished Permit User Sessions.
Dec  4 19:18:41 np0005546222 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Dec  4 19:18:41 np0005546222 rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  4 19:18:41 np0005546222 systemd[1]: Started Command Scheduler.
Dec  4 19:18:41 np0005546222 systemd[1]: Started Getty on tty1.
Dec  4 19:18:41 np0005546222 systemd[1]: Started Serial Getty on ttyS0.
Dec  4 19:18:41 np0005546222 systemd[1]: Reached target Login Prompts.
Dec  4 19:18:41 np0005546222 systemd[1]: Started OpenSSH server daemon.
Dec  4 19:18:41 np0005546222 systemd[1]: Started System Logging Service.
Dec  4 19:18:41 np0005546222 systemd[1]: Reached target Multi-User System.
Dec  4 19:18:41 np0005546222 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  4 19:18:41 np0005546222 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  4 19:18:41 np0005546222 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  4 19:18:41 np0005546222 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 19:18:42 np0005546222 kdumpctl[1020]: kdump: No kdump initial ramdisk found.
Dec  4 19:18:42 np0005546222 kdumpctl[1020]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  4 19:18:42 np0005546222 cloud-init[1147]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Fri, 05 Dec 2025 00:18:42 +0000. Up 9.30 seconds.
Dec  4 19:18:42 np0005546222 systemd[1]: Finished Cloud-init: Config Stage.
Dec  4 19:18:42 np0005546222 systemd[1]: Starting Cloud-init: Final Stage...
Dec  4 19:18:42 np0005546222 dracut[1286]: dracut-057-102.git20250818.el9
Dec  4 19:18:42 np0005546222 dracut[1288]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  4 19:18:42 np0005546222 cloud-init[1328]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Fri, 05 Dec 2025 00:18:42 +0000. Up 9.67 seconds.
Dec  4 19:18:42 np0005546222 cloud-init[1357]: #############################################################
Dec  4 19:18:42 np0005546222 cloud-init[1359]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  4 19:18:42 np0005546222 cloud-init[1361]: 256 SHA256:zjEmhd+SR45MSS774Hti793KOBQQmEvPoyBkW8m9HI8 root@np0005546222.novalocal (ECDSA)
Dec  4 19:18:42 np0005546222 cloud-init[1363]: 256 SHA256:McTQA/aXxjNH1RPfUIGoMMKutUrrMSACe0Q1FA8WXqk root@np0005546222.novalocal (ED25519)
Dec  4 19:18:42 np0005546222 cloud-init[1368]: 3072 SHA256:tUePpzzRHZgyI++cXeT8lDATfBnObToXhC99+x2g1LY root@np0005546222.novalocal (RSA)
Dec  4 19:18:42 np0005546222 cloud-init[1369]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  4 19:18:42 np0005546222 cloud-init[1370]: #############################################################
Dec  4 19:18:42 np0005546222 cloud-init[1328]: Cloud-init v. 24.4-7.el9 finished at Fri, 05 Dec 2025 00:18:42 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.86 seconds
Dec  4 19:18:42 np0005546222 systemd[1]: Finished Cloud-init: Final Stage.
Dec  4 19:18:42 np0005546222 systemd[1]: Reached target Cloud-init target.
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: memstrack is not available
Dec  4 19:18:43 np0005546222 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  4 19:18:43 np0005546222 dracut[1288]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  4 19:18:44 np0005546222 dracut[1288]: memstrack is not available
Dec  4 19:18:44 np0005546222 dracut[1288]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  4 19:18:44 np0005546222 dracut[1288]: *** Including module: systemd ***
Dec  4 19:18:44 np0005546222 dracut[1288]: *** Including module: fips ***
Dec  4 19:18:44 np0005546222 dracut[1288]: *** Including module: systemd-initrd ***
Dec  4 19:18:44 np0005546222 dracut[1288]: *** Including module: i18n ***
Dec  4 19:18:44 np0005546222 dracut[1288]: *** Including module: drm ***
Dec  4 19:18:45 np0005546222 chronyd[795]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Dec  4 19:18:45 np0005546222 chronyd[795]: System clock TAI offset set to 37 seconds
Dec  4 19:18:45 np0005546222 dracut[1288]: *** Including module: prefixdevname ***
Dec  4 19:18:45 np0005546222 dracut[1288]: *** Including module: kernel-modules ***
Dec  4 19:18:45 np0005546222 kernel: block vda: the capability attribute has been deprecated.
Dec  4 19:18:45 np0005546222 dracut[1288]: *** Including module: kernel-modules-extra ***
Dec  4 19:18:45 np0005546222 dracut[1288]: *** Including module: qemu ***
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: fstab-sys ***
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: rootfs-block ***
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: terminfo ***
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: udev-rules ***
Dec  4 19:18:46 np0005546222 dracut[1288]: Skipping udev rule: 91-permissions.rules
Dec  4 19:18:46 np0005546222 dracut[1288]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: virtiofs ***
Dec  4 19:18:46 np0005546222 dracut[1288]: *** Including module: dracut-systemd ***
Dec  4 19:18:47 np0005546222 chronyd[795]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Dec  4 19:18:47 np0005546222 dracut[1288]: *** Including module: usrmount ***
Dec  4 19:18:47 np0005546222 dracut[1288]: *** Including module: base ***
Dec  4 19:18:47 np0005546222 dracut[1288]: *** Including module: fs-lib ***
Dec  4 19:18:47 np0005546222 dracut[1288]: *** Including module: kdumpbase ***
Dec  4 19:18:47 np0005546222 dracut[1288]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  4 19:18:47 np0005546222 dracut[1288]:  microcode_ctl module: mangling fw_dir
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel" is ignored
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  4 19:18:47 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  4 19:18:48 np0005546222 dracut[1288]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  4 19:18:48 np0005546222 dracut[1288]: *** Including module: openssl ***
Dec  4 19:18:48 np0005546222 dracut[1288]: *** Including module: shutdown ***
Dec  4 19:18:48 np0005546222 dracut[1288]: *** Including module: squash ***
Dec  4 19:18:48 np0005546222 dracut[1288]: *** Including modules done ***
Dec  4 19:18:48 np0005546222 dracut[1288]: *** Installing kernel module dependencies ***
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 25 affinity is now unmanaged
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 31 affinity is now unmanaged
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 28 affinity is now unmanaged
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 32 affinity is now unmanaged
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 30 affinity is now unmanaged
Dec  4 19:18:48 np0005546222 irqbalance[786]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  4 19:18:48 np0005546222 irqbalance[786]: IRQ 29 affinity is now unmanaged
Dec  4 19:18:49 np0005546222 dracut[1288]: *** Installing kernel module dependencies done ***
Dec  4 19:18:49 np0005546222 dracut[1288]: *** Resolving executable dependencies ***
Dec  4 19:18:50 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:18:51 np0005546222 dracut[1288]: *** Resolving executable dependencies done ***
Dec  4 19:18:51 np0005546222 dracut[1288]: *** Generating early-microcode cpio image ***
Dec  4 19:18:51 np0005546222 dracut[1288]: *** Store current command line parameters ***
Dec  4 19:18:51 np0005546222 dracut[1288]: Stored kernel commandline:
Dec  4 19:18:51 np0005546222 dracut[1288]: No dracut internal kernel commandline stored in the initramfs
Dec  4 19:18:51 np0005546222 dracut[1288]: *** Install squash loader ***
Dec  4 19:18:52 np0005546222 dracut[1288]: *** Squashing the files inside the initramfs ***
Dec  4 19:18:53 np0005546222 dracut[1288]: *** Squashing the files inside the initramfs done ***
Dec  4 19:18:53 np0005546222 dracut[1288]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  4 19:18:53 np0005546222 dracut[1288]: *** Hardlinking files ***
Dec  4 19:18:53 np0005546222 dracut[1288]: *** Hardlinking files done ***
Dec  4 19:18:53 np0005546222 dracut[1288]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  4 19:18:54 np0005546222 kdumpctl[1020]: kdump: kexec: loaded kdump kernel
Dec  4 19:18:54 np0005546222 kdumpctl[1020]: kdump: Starting kdump: [OK]
Dec  4 19:18:54 np0005546222 systemd[1]: Finished Crash recovery kernel arming.
Dec  4 19:18:54 np0005546222 systemd[1]: Startup finished in 2.087s (kernel) + 2.537s (initrd) + 16.769s (userspace) = 21.395s.
Dec  4 19:18:56 np0005546222 systemd[1]: Created slice User Slice of UID 1000.
Dec  4 19:18:56 np0005546222 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  4 19:18:56 np0005546222 systemd-logind[792]: New session 1 of user zuul.
Dec  4 19:18:56 np0005546222 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  4 19:18:56 np0005546222 systemd[1]: Starting User Manager for UID 1000...
Dec  4 19:18:56 np0005546222 systemd[4301]: Queued start job for default target Main User Target.
Dec  4 19:18:56 np0005546222 systemd[4301]: Created slice User Application Slice.
Dec  4 19:18:56 np0005546222 systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  4 19:18:56 np0005546222 systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 19:18:56 np0005546222 systemd[4301]: Reached target Paths.
Dec  4 19:18:56 np0005546222 systemd[4301]: Reached target Timers.
Dec  4 19:18:56 np0005546222 systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec  4 19:18:56 np0005546222 systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec  4 19:18:57 np0005546222 systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec  4 19:18:57 np0005546222 systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec  4 19:18:57 np0005546222 systemd[4301]: Reached target Sockets.
Dec  4 19:18:57 np0005546222 systemd[4301]: Reached target Basic System.
Dec  4 19:18:57 np0005546222 systemd[4301]: Reached target Main User Target.
Dec  4 19:18:57 np0005546222 systemd[4301]: Startup finished in 114ms.
Dec  4 19:18:57 np0005546222 systemd[1]: Started User Manager for UID 1000.
Dec  4 19:18:57 np0005546222 systemd[1]: Started Session 1 of User zuul.
Dec  4 19:18:57 np0005546222 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:19:00 np0005546222 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:19:06 np0005546222 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:19:07 np0005546222 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  4 19:19:08 np0005546222 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrke2nNI7LZZcrV3DsdyMSIR4c6KfqG/Om714fquYAFbh1UWyn5oMHsKsvzgsrOIcaGmNYOpXmgcRacQSrOFDvb+xChD7lS8+fQBxFPtuZUoI1pMyWDDnKHr1t9ZABIurzBz2x+fMUQ7vpMBANf3KQwUowFL1piEDrThsoTDWM/RqfkQcwYjWuqLci1YcDlCCOf+xEgOQbX0YjS0LMpk7WURouYANIkIIoKnbXtkKfylX2rW/ZPtpCFKLORsNs5QCXSITbkfr8npItpelyDo0Wu3HbLKsR6tip36RnB+aso4Dm9OnAPxtQ17bNtiLUQHuqLiYMirkjizswukaynpECYtGzccW+QGEcmnZ6TmG9uxVqFhGZmt1c6WQWbFDmOPISeov6LQc6Tgg9OllMZT1bpQQFDE9jjVJxaIhQkam2w7eawimiQa17Rl6EScnamNQFx2E9m5UNxdqy4OY95y4Cy5w/qaJbtcESxO91Qch5DA6el+D1ayPiyj1A31ugLkM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:09 np0005546222 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:09 np0005546222 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:10 np0005546222 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 19:19:10 np0005546222 python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764893949.5651255-207-167469590867975/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=3a75b7ae28fe48ff8d276b97cada9e67_id_rsa follow=False checksum=a05cea211c85eb88e6672f4e1f9d0017264e88e8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:10 np0005546222 python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:11 np0005546222 python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764893950.5522387-240-132846170932382/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=3a75b7ae28fe48ff8d276b97cada9e67_id_rsa.pub follow=False checksum=f80602560ff0022bd9c0fa6a603c5552a0d66a17 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:12 np0005546222 python3[4973]: ansible-ping Invoked with data=pong
Dec  4 19:19:13 np0005546222 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:19:14 np0005546222 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  4 19:19:15 np0005546222 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:15 np0005546222 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:16 np0005546222 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:16 np0005546222 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:16 np0005546222 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:16 np0005546222 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:18 np0005546222 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:18 np0005546222 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:19 np0005546222 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764893958.4652362-21-24959342436566/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:19 np0005546222 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:20 np0005546222 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:20 np0005546222 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:20 np0005546222 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:20 np0005546222 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:21 np0005546222 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:21 np0005546222 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:21 np0005546222 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:22 np0005546222 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:22 np0005546222 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:22 np0005546222 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:22 np0005546222 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:23 np0005546222 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:23 np0005546222 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:23 np0005546222 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:23 np0005546222 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:24 np0005546222 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:24 np0005546222 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:24 np0005546222 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:25 np0005546222 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:25 np0005546222 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:25 np0005546222 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:25 np0005546222 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:26 np0005546222 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:26 np0005546222 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:26 np0005546222 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:19:32 np0005546222 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  4 19:19:32 np0005546222 systemd[1]: Starting Time & Date Service...
Dec  4 19:19:32 np0005546222 systemd[1]: Started Time & Date Service.
Dec  4 19:19:32 np0005546222 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Dec  4 19:19:32 np0005546222 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:33 np0005546222 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:33 np0005546222 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764893973.175383-153-226596726729127/source _original_basename=tmpwuu27w03 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:34 np0005546222 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:34 np0005546222 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764893973.9798925-183-263147300137375/source _original_basename=tmp5gh9pb2w follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:35 np0005546222 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:35 np0005546222 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764893974.97921-231-53259489605934/source _original_basename=tmpu3qw08j2 follow=False checksum=8e0e434468aa50922357fbdb56d8b197f48f0949 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:36 np0005546222 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:19:36 np0005546222 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:19:36 np0005546222 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:19:37 np0005546222 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764893976.5276375-273-32351702820715/source _original_basename=tmps1ps58bj follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:37 np0005546222 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-cf93-3d31-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:19:38 np0005546222 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-cf93-3d31-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  4 19:19:39 np0005546222 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:19:59 np0005546222 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:20:02 np0005546222 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  4 19:20:42 np0005546222 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  4 19:20:42 np0005546222 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3351] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 19:20:42 np0005546222 systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3566] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3615] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3620] device (eth1): carrier: link connected
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3623] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3634] policy: auto-activating connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3643] device (eth1): Activation: starting connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3644] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3649] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3655] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:20:42 np0005546222 NetworkManager[861]: <info>  [1764894042.3662] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:20:43 np0005546222 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-6e21-e576-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:20:53 np0005546222 python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:20:53 np0005546222 python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764894052.9167798-102-66769513048676/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=f07f1bcbce7c2f75e7f9492c34b4635a0841af8e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:20:54 np0005546222 python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:20:54 np0005546222 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  4 19:20:54 np0005546222 systemd[1]: Stopped Network Manager Wait Online.
Dec  4 19:20:54 np0005546222 systemd[1]: Stopping Network Manager Wait Online...
Dec  4 19:20:54 np0005546222 systemd[1]: Stopping Network Manager...
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4074] caught SIGTERM, shutting down normally.
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4092] dhcp4 (eth0): canceled DHCP transaction
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4093] dhcp4 (eth0): state changed no lease
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4097] manager: NetworkManager state is now CONNECTING
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4149] dhcp4 (eth1): canceled DHCP transaction
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4150] dhcp4 (eth1): state changed no lease
Dec  4 19:20:54 np0005546222 NetworkManager[861]: <info>  [1764894054.4217] exiting (success)
Dec  4 19:20:54 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:20:54 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:20:54 np0005546222 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  4 19:20:54 np0005546222 systemd[1]: Stopped Network Manager.
Dec  4 19:20:54 np0005546222 systemd[1]: NetworkManager.service: Consumed 1.088s CPU time, 10.0M memory peak.
Dec  4 19:20:54 np0005546222 systemd[1]: Starting Network Manager...
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.4671] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.4676] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.4734] manager[0x5600f85c7070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 19:20:54 np0005546222 systemd[1]: Starting Hostname Service...
Dec  4 19:20:54 np0005546222 systemd[1]: Started Hostname Service.
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5946] hostname: hostname: using hostnamed
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5948] hostname: static hostname changed from (none) to "np0005546222.novalocal"
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5955] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5961] manager[0x5600f85c7070]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5961] manager[0x5600f85c7070]: rfkill: WWAN hardware radio set enabled
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.5999] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6000] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6000] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6001] manager: Networking is enabled by state file
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6004] settings: Loaded settings plugin: keyfile (internal)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6008] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6037] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6051] dhcp: init: Using DHCP client 'internal'
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6054] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6060] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6067] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6077] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6085] device (eth0): carrier: link connected
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6090] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6097] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6098] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6107] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6116] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6125] device (eth1): carrier: link connected
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6130] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6136] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4) (indicated)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6136] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6144] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6152] device (eth1): Activation: starting connection 'Wired connection 1' (74afdaaf-08cb-315f-8816-01bd59fc3bf4)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6160] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 19:20:54 np0005546222 systemd[1]: Started Network Manager.
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6178] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6185] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6188] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6192] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6196] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6201] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6205] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6210] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6220] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6225] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6243] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6247] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6270] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6279] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6288] device (lo): Activation: successful, device activated.
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6300] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6313] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 19:20:54 np0005546222 systemd[1]: Starting Network Manager Wait Online...
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6403] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6426] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6430] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6436] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6445] device (eth0): Activation: successful, device activated.
Dec  4 19:20:54 np0005546222 NetworkManager[7183]: <info>  [1764894054.6453] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 19:20:54 np0005546222 python3[7259]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-6e21-e576-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:21:04 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:21:16 np0005546222 systemd[4301]: Starting Mark boot as successful...
Dec  4 19:21:16 np0005546222 systemd[4301]: Finished Mark boot as successful.
Dec  4 19:21:24 np0005546222 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8576] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 19:21:39 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:21:39 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8971] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8972] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8978] device (eth1): Activation: successful, device activated.
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8983] manager: startup complete
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8986] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <warn>  [1764894099.8989] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.8996] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 systemd[1]: Finished Network Manager Wait Online.
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9089] dhcp4 (eth1): canceled DHCP transaction
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9090] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9090] dhcp4 (eth1): state changed no lease
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9105] policy: auto-activating connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9110] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9110] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9113] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9118] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9125] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9159] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9161] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:21:39 np0005546222 NetworkManager[7183]: <info>  [1764894099.9166] device (eth1): Activation: successful, device activated.
Dec  4 19:21:49 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:21:55 np0005546222 systemd-logind[792]: Session 1 logged out. Waiting for processes to exit.
Dec  4 19:22:01 np0005546222 systemd-logind[792]: New session 3 of user zuul.
Dec  4 19:22:01 np0005546222 systemd[1]: Started Session 3 of User zuul.
Dec  4 19:22:02 np0005546222 python3[7369]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:22:02 np0005546222 python3[7442]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894122.0749617-267-232678731867454/source _original_basename=tmp8nei87_4 follow=False checksum=f7e92cd384322c1de547c4614a92d0716d6c382e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:22:04 np0005546222 systemd[1]: session-3.scope: Deactivated successfully.
Dec  4 19:22:04 np0005546222 systemd-logind[792]: Session 3 logged out. Waiting for processes to exit.
Dec  4 19:22:04 np0005546222 systemd-logind[792]: Removed session 3.
Dec  4 19:24:16 np0005546222 systemd[4301]: Created slice User Background Tasks Slice.
Dec  4 19:24:16 np0005546222 systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec  4 19:24:16 np0005546222 systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec  4 19:29:14 np0005546222 systemd-logind[792]: New session 4 of user zuul.
Dec  4 19:29:14 np0005546222 systemd[1]: Started Session 4 of User zuul.
Dec  4 19:29:14 np0005546222 python3[7520]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-59a0-e257-000000001cd6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:14 np0005546222 python3[7549]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:14 np0005546222 python3[7575]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:15 np0005546222 python3[7601]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:15 np0005546222 python3[7627]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:15 np0005546222 python3[7653]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:16 np0005546222 python3[7731]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:29:16 np0005546222 python3[7804]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894556.1267543-477-99524090553393/source _original_basename=tmp7ryqk7zz follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:29:17 np0005546222 python3[7854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 19:29:17 np0005546222 systemd[1]: Reloading.
Dec  4 19:29:17 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:29:19 np0005546222 python3[7910]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  4 19:29:20 np0005546222 python3[7936]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:20 np0005546222 python3[7964]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:20 np0005546222 python3[7992]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:21 np0005546222 python3[8020]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:21 np0005546222 python3[8047]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-59a0-e257-000000001cdd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:29:22 np0005546222 python3[8077]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  4 19:29:24 np0005546222 systemd[1]: session-4.scope: Deactivated successfully.
Dec  4 19:29:24 np0005546222 systemd[1]: session-4.scope: Consumed 4.207s CPU time.
Dec  4 19:29:24 np0005546222 systemd-logind[792]: Session 4 logged out. Waiting for processes to exit.
Dec  4 19:29:24 np0005546222 systemd-logind[792]: Removed session 4.
Dec  4 19:29:25 np0005546222 systemd-logind[792]: New session 5 of user zuul.
Dec  4 19:29:25 np0005546222 systemd[1]: Started Session 5 of User zuul.
Dec  4 19:29:25 np0005546222 python3[8113]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  4 19:29:38 np0005546222 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:29:38 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:29:47 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  Converting 385 SID table entries...
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:29:56 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:29:57 np0005546222 setsebool[8176]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  4 19:29:57 np0005546222 setsebool[8176]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  4 19:30:08 np0005546222 kernel: SELinux:  Converting 388 SID table entries...
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:30:08 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:30:26 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  4 19:30:26 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:30:26 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:30:26 np0005546222 systemd[1]: Reloading.
Dec  4 19:30:26 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:30:26 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:30:36 np0005546222 python3[15501]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-1d2a-bece-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:30:37 np0005546222 kernel: evm: overlay not supported
Dec  4 19:30:37 np0005546222 systemd[4301]: Starting D-Bus User Message Bus...
Dec  4 19:30:37 np0005546222 dbus-broker-launch[15949]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  4 19:30:37 np0005546222 dbus-broker-launch[15949]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  4 19:30:37 np0005546222 systemd[4301]: Started D-Bus User Message Bus.
Dec  4 19:30:37 np0005546222 dbus-broker-lau[15949]: Ready
Dec  4 19:30:37 np0005546222 systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  4 19:30:37 np0005546222 systemd[4301]: Created slice Slice /user.
Dec  4 19:30:37 np0005546222 systemd[4301]: podman-15881.scope: unit configures an IP firewall, but not running as root.
Dec  4 19:30:37 np0005546222 systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec  4 19:30:37 np0005546222 systemd[4301]: Started podman-15881.scope.
Dec  4 19:30:37 np0005546222 systemd[4301]: Started podman-pause-59ff2333.scope.
Dec  4 19:30:38 np0005546222 python3[16291]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.107:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.107:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:30:38 np0005546222 python3[16291]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  4 19:30:38 np0005546222 systemd[1]: session-5.scope: Deactivated successfully.
Dec  4 19:30:38 np0005546222 systemd[1]: session-5.scope: Consumed 59.087s CPU time.
Dec  4 19:30:38 np0005546222 systemd-logind[792]: Session 5 logged out. Waiting for processes to exit.
Dec  4 19:30:38 np0005546222 systemd-logind[792]: Removed session 5.
Dec  4 19:30:48 np0005546222 irqbalance[786]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  4 19:30:48 np0005546222 irqbalance[786]: IRQ 27 affinity is now unmanaged
Dec  4 19:31:01 np0005546222 systemd-logind[792]: New session 6 of user zuul.
Dec  4 19:31:01 np0005546222 systemd[1]: Started Session 6 of User zuul.
Dec  4 19:31:01 np0005546222 python3[25367]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:31:02 np0005546222 python3[25591]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:31:03 np0005546222 python3[26023]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005546222.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  4 19:31:03 np0005546222 python3[26217]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAteWjKx1WidR2fld72QkeCDJvAUiRqCvGoAMZWxhexJ1YeJP5ASDHAiFUpkx06Liwggu/eavRoHvmQvhjUvhOU= zuul@np0005546221.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  4 19:31:04 np0005546222 python3[26447]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:31:04 np0005546222 python3[26688]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764894663.824713-135-161331014616792/source _original_basename=tmp6p6l9uxd follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:31:05 np0005546222 python3[27016]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  4 19:31:05 np0005546222 systemd[1]: Starting Hostname Service...
Dec  4 19:31:05 np0005546222 systemd[1]: Started Hostname Service.
Dec  4 19:31:05 np0005546222 systemd-hostnamed[27112]: Changed pretty hostname to 'compute-0'
Dec  4 19:31:05 np0005546222 systemd-hostnamed[27112]: Hostname set to <compute-0> (static)
Dec  4 19:31:05 np0005546222 NetworkManager[7183]: <info>  [1764894665.5333] hostname: static hostname changed from "np0005546222.novalocal" to "compute-0"
Dec  4 19:31:05 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:31:05 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:31:06 np0005546222 systemd[1]: session-6.scope: Deactivated successfully.
Dec  4 19:31:06 np0005546222 systemd[1]: session-6.scope: Consumed 2.183s CPU time.
Dec  4 19:31:06 np0005546222 systemd-logind[792]: Session 6 logged out. Waiting for processes to exit.
Dec  4 19:31:06 np0005546222 systemd-logind[792]: Removed session 6.
Dec  4 19:31:08 np0005546222 irqbalance[786]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  4 19:31:08 np0005546222 irqbalance[786]: IRQ 26 affinity is now unmanaged
Dec  4 19:31:14 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:31:14 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:31:14 np0005546222 systemd[1]: man-db-cache-update.service: Consumed 57.282s CPU time.
Dec  4 19:31:14 np0005546222 systemd[1]: run-rabd214ef511a41589295ad367d7d3a2d.service: Deactivated successfully.
Dec  4 19:31:15 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:31:35 np0005546222 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 19:34:16 np0005546222 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  4 19:34:16 np0005546222 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  4 19:34:16 np0005546222 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  4 19:34:16 np0005546222 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  4 19:36:10 np0005546222 systemd-logind[792]: New session 7 of user zuul.
Dec  4 19:36:10 np0005546222 systemd[1]: Started Session 7 of User zuul.
Dec  4 19:36:11 np0005546222 python3[30072]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:36:12 np0005546222 python3[30188]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:13 np0005546222 python3[30261]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean.repo follow=False checksum=39c885eb875fd03e010d1b0454241c26b121dfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:13 np0005546222 python3[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:14 np0005546222 python3[30360]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:14 np0005546222 python3[30386]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:14 np0005546222 python3[30459]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:15 np0005546222 python3[30485]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:15 np0005546222 python3[30558]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:15 np0005546222 python3[30584]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:16 np0005546222 python3[30657]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:16 np0005546222 python3[30683]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:16 np0005546222 python3[30756]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:36:17 np0005546222 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  4 19:36:17 np0005546222 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764894972.5303113-33625-133759135839121/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6e18e2038d54303b4926db53c0b6cced515a9151 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:39:10 np0005546222 python3[30913]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:42:16 np0005546222 systemd[1]: Starting dnf makecache...
Dec  4 19:42:16 np0005546222 dnf[30927]: Failed determining last makecache time.
Dec  4 19:42:16 np0005546222 dnf[30927]: delorean-openstack-barbican-42b4c41831408a8e323 332 kB/s |  13 kB     00:00
Dec  4 19:42:16 np0005546222 dnf[30927]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 1.9 MB/s |  65 kB     00:00
Dec  4 19:42:16 np0005546222 dnf[30927]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.1 MB/s |  32 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-python-stevedore-c4acc5639fd2329372142 3.7 MB/s | 131 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.4 MB/s |  32 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-os-net-config-d0cedbdb788d43e5c7551df5 3.1 MB/s | 349 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.1 MB/s |  42 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-python-designate-tests-tempest-347fdbc 626 kB/s |  18 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-glance-1fd12c29b339f30fe823e 483 kB/s |  18 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.3 MB/s |  29 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-manila-3c01b7181572c95dac462 773 kB/s |  25 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-python-whitebox-neutron-tests-tempest- 6.2 MB/s | 154 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-octavia-ba397f07a7331190208c 823 kB/s |  26 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-watcher-c014f81a8647287f6dcc 526 kB/s |  16 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-ansible-config_template-5ccaa22121a7ff 318 kB/s | 7.4 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 3.8 MB/s | 144 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-swift-dc98a8463506ac520c469a 446 kB/s |  14 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-python-tempestconf-8515371b7cceebd4282 1.7 MB/s |  53 kB     00:00
Dec  4 19:42:17 np0005546222 dnf[30927]: delorean-openstack-heat-ui-013accbfd179753bc3f0 2.3 MB/s |  96 kB     00:00
Dec  4 19:42:18 np0005546222 dnf[30927]: CentOS Stream 9 - BaseOS                         27 kB/s | 7.3 kB     00:00
Dec  4 19:42:18 np0005546222 dnf[30927]: CentOS Stream 9 - AppStream                      81 kB/s | 7.4 kB     00:00
Dec  4 19:42:18 np0005546222 dnf[30927]: CentOS Stream 9 - CRB                            70 kB/s | 7.2 kB     00:00
Dec  4 19:42:18 np0005546222 dnf[30927]: CentOS Stream 9 - Extras packages                73 kB/s | 8.3 kB     00:00
Dec  4 19:42:18 np0005546222 dnf[30927]: dlrn-antelope-testing                            25 MB/s | 1.1 MB     00:00
Dec  4 19:42:19 np0005546222 dnf[30927]: dlrn-antelope-build-deps                         17 MB/s | 461 kB     00:00
Dec  4 19:42:19 np0005546222 dnf[30927]: centos9-rabbitmq                                8.5 MB/s | 123 kB     00:00
Dec  4 19:42:19 np0005546222 dnf[30927]: centos9-storage                                  27 MB/s | 415 kB     00:00
Dec  4 19:42:19 np0005546222 dnf[30927]: centos9-opstools                                4.5 MB/s |  51 kB     00:00
Dec  4 19:42:19 np0005546222 dnf[30927]: NFV SIG OpenvSwitch                              18 MB/s | 456 kB     00:00
Dec  4 19:42:20 np0005546222 dnf[30927]: repo-setup-centos-appstream                      85 MB/s |  25 MB     00:00
Dec  4 19:42:26 np0005546222 dnf[30927]: repo-setup-centos-baseos                         63 MB/s | 8.8 MB     00:00
Dec  4 19:42:27 np0005546222 dnf[30927]: repo-setup-centos-highavailability               15 MB/s | 744 kB     00:00
Dec  4 19:42:27 np0005546222 dnf[30927]: repo-setup-centos-powertools                     72 MB/s | 7.3 MB     00:00
Dec  4 19:42:30 np0005546222 dnf[30927]: Extra Packages for Enterprise Linux 9 - x86_64   27 MB/s |  20 MB     00:00
Dec  4 19:42:43 np0005546222 dnf[30927]: Metadata cache created.
Dec  4 19:42:43 np0005546222 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  4 19:42:43 np0005546222 systemd[1]: Finished dnf makecache.
Dec  4 19:42:43 np0005546222 systemd[1]: dnf-makecache.service: Consumed 24.477s CPU time.
Dec  4 19:44:09 np0005546222 systemd[1]: session-7.scope: Deactivated successfully.
Dec  4 19:44:09 np0005546222 systemd[1]: session-7.scope: Consumed 5.635s CPU time.
Dec  4 19:44:09 np0005546222 systemd-logind[792]: Session 7 logged out. Waiting for processes to exit.
Dec  4 19:44:09 np0005546222 systemd-logind[792]: Removed session 7.
Dec  4 19:50:56 np0005546222 systemd-logind[792]: New session 8 of user zuul.
Dec  4 19:50:56 np0005546222 systemd[1]: Started Session 8 of User zuul.
Dec  4 19:50:57 np0005546222 python3.9[31185]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:50:59 np0005546222 python3.9[31366]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:51:06 np0005546222 systemd[1]: session-8.scope: Deactivated successfully.
Dec  4 19:51:06 np0005546222 systemd[1]: session-8.scope: Consumed 7.855s CPU time.
Dec  4 19:51:06 np0005546222 systemd-logind[792]: Session 8 logged out. Waiting for processes to exit.
Dec  4 19:51:06 np0005546222 systemd-logind[792]: Removed session 8.
Dec  4 19:51:22 np0005546222 systemd-logind[792]: New session 9 of user zuul.
Dec  4 19:51:22 np0005546222 systemd[1]: Started Session 9 of User zuul.
Dec  4 19:51:23 np0005546222 python3.9[31578]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  4 19:51:24 np0005546222 python3.9[31752]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:51:25 np0005546222 python3.9[31904]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:51:26 np0005546222 python3.9[32057]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:51:27 np0005546222 python3.9[32209]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:51:28 np0005546222 python3.9[32361]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:51:29 np0005546222 python3.9[32484]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764895887.801169-73-192652768040053/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:51:29 np0005546222 python3.9[32636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:51:30 np0005546222 python3.9[32792]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:51:31 np0005546222 python3.9[32944]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:51:32 np0005546222 python3.9[33094]: ansible-ansible.builtin.service_facts Invoked
Dec  4 19:51:38 np0005546222 python3.9[33347]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:51:39 np0005546222 python3.9[33497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:51:40 np0005546222 python3.9[33651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:51:41 np0005546222 python3.9[33809]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:51:42 np0005546222 python3.9[33893]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:52:26 np0005546222 systemd[1]: Reloading.
Dec  4 19:52:26 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:52:26 np0005546222 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  4 19:52:27 np0005546222 systemd[1]: Reloading.
Dec  4 19:52:27 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:52:27 np0005546222 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  4 19:52:27 np0005546222 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  4 19:52:27 np0005546222 systemd[1]: Reloading.
Dec  4 19:52:27 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:52:27 np0005546222 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  4 19:52:27 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 19:52:27 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 19:52:27 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 19:53:32 np0005546222 kernel: SELinux:  Converting 2719 SID table entries...
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:53:32 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:53:32 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  4 19:53:33 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:53:33 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:53:33 np0005546222 systemd[1]: Reloading.
Dec  4 19:53:33 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:53:33 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:53:34 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:53:34 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:53:34 np0005546222 systemd[1]: man-db-cache-update.service: Consumed 1.157s CPU time.
Dec  4 19:53:34 np0005546222 systemd[1]: run-r4d6dfa0464004ed9afee165154f165ad.service: Deactivated successfully.
Dec  4 19:53:34 np0005546222 python3.9[35438]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:53:36 np0005546222 python3.9[35719]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  4 19:53:37 np0005546222 python3.9[35871]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  4 19:53:40 np0005546222 python3.9[36024]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:53:41 np0005546222 python3.9[36176]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  4 19:53:42 np0005546222 python3.9[36328]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:53:44 np0005546222 python3.9[36480]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:53:48 np0005546222 python3.9[36603]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896022.979676-236-152245672197792/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:53:50 np0005546222 python3.9[36755]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:53:50 np0005546222 python3.9[36907]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:53:51 np0005546222 python3.9[37060]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:53:52 np0005546222 python3.9[37212]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  4 19:53:52 np0005546222 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 19:53:53 np0005546222 python3.9[37366]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 19:53:54 np0005546222 python3.9[37524]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 19:53:55 np0005546222 python3.9[37684]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  4 19:53:56 np0005546222 python3.9[37837]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 19:53:57 np0005546222 python3.9[37995]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  4 19:53:58 np0005546222 python3.9[38147]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:54:00 np0005546222 python3.9[38300]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:54:01 np0005546222 python3.9[38452]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:54:01 np0005546222 python3.9[38575]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896040.871625-355-185880773444324/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:54:02 np0005546222 python3.9[38727]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:54:03 np0005546222 systemd[1]: Starting Load Kernel Modules...
Dec  4 19:54:03 np0005546222 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  4 19:54:03 np0005546222 kernel: Bridge firewalling registered
Dec  4 19:54:03 np0005546222 systemd-modules-load[38731]: Inserted module 'br_netfilter'
Dec  4 19:54:03 np0005546222 systemd[1]: Finished Load Kernel Modules.
Dec  4 19:54:03 np0005546222 python3.9[38887]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:54:04 np0005546222 python3.9[39010]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896043.2780735-378-96168174893512/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:54:05 np0005546222 python3.9[39162]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:54:10 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 19:54:10 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 19:54:11 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:54:11 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:54:11 np0005546222 systemd[1]: Reloading.
Dec  4 19:54:11 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:54:11 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:54:13 np0005546222 python3.9[41457]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:54:14 np0005546222 python3.9[42481]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  4 19:54:15 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:54:15 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:54:15 np0005546222 systemd[1]: man-db-cache-update.service: Consumed 4.700s CPU time.
Dec  4 19:54:15 np0005546222 systemd[1]: run-r2da31c0c80154411b0a98127cd83664a.service: Deactivated successfully.
Dec  4 19:54:15 np0005546222 python3.9[43205]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:54:16 np0005546222 python3.9[43358]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:54:16 np0005546222 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  4 19:54:16 np0005546222 systemd[1]: Starting Authorization Manager...
Dec  4 19:54:16 np0005546222 polkitd[43575]: Started polkitd version 0.117
Dec  4 19:54:16 np0005546222 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  4 19:54:16 np0005546222 systemd[1]: Started Authorization Manager.
Dec  4 19:54:17 np0005546222 python3.9[43745]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:54:17 np0005546222 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  4 19:54:17 np0005546222 systemd[1]: tuned.service: Deactivated successfully.
Dec  4 19:54:17 np0005546222 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  4 19:54:17 np0005546222 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  4 19:54:17 np0005546222 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  4 19:54:18 np0005546222 python3.9[43907]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  4 19:54:21 np0005546222 python3.9[44059]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:54:21 np0005546222 systemd[1]: Reloading.
Dec  4 19:54:21 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:54:22 np0005546222 python3.9[44249]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:54:22 np0005546222 systemd[1]: Reloading.
Dec  4 19:54:22 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:54:23 np0005546222 python3.9[44438]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:54:24 np0005546222 python3.9[44591]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:54:24 np0005546222 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  4 19:54:25 np0005546222 python3.9[44744]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:54:27 np0005546222 python3.9[44906]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:54:27 np0005546222 python3.9[45059]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:54:27 np0005546222 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  4 19:54:27 np0005546222 systemd[1]: Stopped Apply Kernel Variables.
Dec  4 19:54:27 np0005546222 systemd[1]: Stopping Apply Kernel Variables...
Dec  4 19:54:27 np0005546222 systemd[1]: Starting Apply Kernel Variables...
Dec  4 19:54:27 np0005546222 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  4 19:54:28 np0005546222 systemd[1]: Finished Apply Kernel Variables.
Dec  4 19:54:28 np0005546222 systemd[1]: session-9.scope: Deactivated successfully.
Dec  4 19:54:28 np0005546222 systemd[1]: session-9.scope: Consumed 2min 14.486s CPU time.
Dec  4 19:54:28 np0005546222 systemd-logind[792]: Session 9 logged out. Waiting for processes to exit.
Dec  4 19:54:28 np0005546222 systemd-logind[792]: Removed session 9.
Dec  4 19:54:34 np0005546222 systemd-logind[792]: New session 10 of user zuul.
Dec  4 19:54:34 np0005546222 systemd[1]: Started Session 10 of User zuul.
Dec  4 19:54:35 np0005546222 python3.9[45242]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:54:36 np0005546222 python3.9[45398]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  4 19:54:37 np0005546222 python3.9[45551]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 19:54:38 np0005546222 python3.9[45709]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 19:54:39 np0005546222 python3.9[45869]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:54:40 np0005546222 python3.9[45953]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 19:54:44 np0005546222 python3.9[46117]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:54:54 np0005546222 kernel: SELinux:  Converting 2731 SID table entries...
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:54:54 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:54:55 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  4 19:54:55 np0005546222 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  4 19:54:56 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:54:56 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:54:56 np0005546222 systemd[1]: Reloading.
Dec  4 19:54:56 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:54:56 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:54:56 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:54:57 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:54:57 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:54:57 np0005546222 systemd[1]: run-re81ed5bc8b4d4abe8d7cb9f291f2b5ec.service: Deactivated successfully.
Dec  4 19:54:58 np0005546222 python3.9[47216]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 19:54:58 np0005546222 systemd[1]: Reloading.
Dec  4 19:54:58 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:54:58 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:54:58 np0005546222 systemd[1]: Starting Open vSwitch Database Unit...
Dec  4 19:54:58 np0005546222 chown[47258]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  4 19:54:58 np0005546222 ovs-ctl[47263]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  4 19:54:58 np0005546222 ovs-ctl[47263]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  4 19:54:58 np0005546222 ovs-ctl[47263]: Starting ovsdb-server [  OK  ]
Dec  4 19:54:58 np0005546222 ovs-vsctl[47312]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  4 19:54:59 np0005546222 ovs-vsctl[47332]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"8dd76c1c-ab01-42af-b35e-2e870841b6ad\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  4 19:54:59 np0005546222 ovs-ctl[47263]: Configuring Open vSwitch system IDs [  OK  ]
Dec  4 19:54:59 np0005546222 ovs-ctl[47263]: Enabling remote OVSDB managers [  OK  ]
Dec  4 19:54:59 np0005546222 systemd[1]: Started Open vSwitch Database Unit.
Dec  4 19:54:59 np0005546222 ovs-vsctl[47338]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  4 19:54:59 np0005546222 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  4 19:54:59 np0005546222 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  4 19:54:59 np0005546222 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  4 19:54:59 np0005546222 kernel: openvswitch: Open vSwitch switching datapath
Dec  4 19:54:59 np0005546222 ovs-ctl[47382]: Inserting openvswitch module [  OK  ]
Dec  4 19:54:59 np0005546222 ovs-ctl[47351]: Starting ovs-vswitchd [  OK  ]
Dec  4 19:54:59 np0005546222 ovs-ctl[47351]: Enabling remote OVSDB managers [  OK  ]
Dec  4 19:54:59 np0005546222 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  4 19:54:59 np0005546222 ovs-vsctl[47400]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  4 19:54:59 np0005546222 systemd[1]: Starting Open vSwitch...
Dec  4 19:54:59 np0005546222 systemd[1]: Finished Open vSwitch.
Dec  4 19:55:00 np0005546222 python3.9[47551]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:55:01 np0005546222 python3.9[47703]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  4 19:55:02 np0005546222 kernel: SELinux:  Converting 2745 SID table entries...
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 19:55:02 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 19:55:03 np0005546222 python3.9[47858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:55:04 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  4 19:55:04 np0005546222 python3.9[48016]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:55:06 np0005546222 python3.9[48169]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:55:08 np0005546222 python3.9[48456]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  4 19:55:08 np0005546222 python3.9[48606]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:55:09 np0005546222 python3.9[48760]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:55:11 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:55:11 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:55:11 np0005546222 systemd[1]: Reloading.
Dec  4 19:55:11 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:55:11 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:55:11 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:55:11 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:55:11 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:55:11 np0005546222 systemd[1]: run-rb892d9f4f1bc4359bf72bc6f6b4e4a8f.service: Deactivated successfully.
Dec  4 19:55:12 np0005546222 python3.9[49079]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:55:12 np0005546222 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  4 19:55:12 np0005546222 systemd[1]: Stopped Network Manager Wait Online.
Dec  4 19:55:12 np0005546222 systemd[1]: Stopping Network Manager Wait Online...
Dec  4 19:55:12 np0005546222 systemd[1]: Stopping Network Manager...
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9171] caught SIGTERM, shutting down normally.
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): canceled DHCP transaction
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9184] dhcp4 (eth0): state changed no lease
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9186] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 19:55:12 np0005546222 NetworkManager[7183]: <info>  [1764896112.9246] exiting (success)
Dec  4 19:55:12 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:55:12 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:55:12 np0005546222 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  4 19:55:12 np0005546222 systemd[1]: Stopped Network Manager.
Dec  4 19:55:12 np0005546222 systemd[1]: NetworkManager.service: Consumed 13.604s CPU time, 4.1M memory peak, read 0B from disk, written 39.0K to disk.
Dec  4 19:55:12 np0005546222 systemd[1]: Starting Network Manager...
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.0012] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:4334a7b0-3a1f-41a9-a980-618d92846a01)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.0014] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.0064] manager[0x564e22c6c090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  4 19:55:13 np0005546222 systemd[1]: Starting Hostname Service...
Dec  4 19:55:13 np0005546222 systemd[1]: Started Hostname Service.
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1175] hostname: hostname: using hostnamed
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1176] hostname: static hostname changed from (none) to "compute-0"
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1182] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1187] manager[0x564e22c6c090]: rfkill: Wi-Fi hardware radio set enabled
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1187] manager[0x564e22c6c090]: rfkill: WWAN hardware radio set enabled
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1208] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1217] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1218] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1218] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1219] manager: Networking is enabled by state file
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1221] settings: Loaded settings plugin: keyfile (internal)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1224] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1249] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1259] dhcp: init: Using DHCP client 'internal'
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1261] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1266] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1271] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1279] device (lo): Activation: starting connection 'lo' (d5cf929f-c0df-4c7c-b75c-299bce2e80f0)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1286] device (eth0): carrier: link connected
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1290] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1294] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1295] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1301] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1307] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1312] device (eth1): carrier: link connected
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1315] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1320] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c) (indicated)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1321] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1326] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1331] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1337] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  4 19:55:13 np0005546222 systemd[1]: Started Network Manager.
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1351] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1355] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1357] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1360] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1363] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1365] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1366] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1371] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1400] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1403] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1414] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1430] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1439] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1442] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1445] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1451] device (lo): Activation: successful, device activated.
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1462] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  4 19:55:13 np0005546222 systemd[1]: Starting Network Manager Wait Online...
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1565] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1571] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1572] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1575] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1578] device (eth1): Activation: successful, device activated.
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1612] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1614] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1616] manager: NetworkManager state is now CONNECTED_SITE
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1620] device (eth0): Activation: successful, device activated.
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1623] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  4 19:55:13 np0005546222 NetworkManager[49092]: <info>  [1764896113.1655] manager: startup complete
Dec  4 19:55:13 np0005546222 systemd[1]: Finished Network Manager Wait Online.
Dec  4 19:55:13 np0005546222 python3.9[49305]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:55:18 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 19:55:18 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 19:55:18 np0005546222 systemd[1]: Reloading.
Dec  4 19:55:18 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:55:18 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:55:18 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 19:55:19 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 19:55:19 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 19:55:19 np0005546222 systemd[1]: run-rff9f6249a21948d7b1b5821026ebd649.service: Deactivated successfully.
Dec  4 19:55:20 np0005546222 python3.9[49764]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:55:21 np0005546222 python3.9[49916]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:22 np0005546222 python3.9[50070]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:22 np0005546222 python3.9[50222]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:23 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:55:23 np0005546222 python3.9[50374]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:24 np0005546222 python3.9[50526]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:25 np0005546222 python3.9[50678]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:55:26 np0005546222 python3.9[50801]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896124.8146298-229-92180075428855/.source _original_basename=.p2a_xk_x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:27 np0005546222 python3.9[50953]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:28 np0005546222 python3.9[51105]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  4 19:55:28 np0005546222 python3.9[51257]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:31 np0005546222 python3.9[51684]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  4 19:55:32 np0005546222 ansible-async_wrapper.py[51859]: Invoked with j477830243345 300 /home/zuul/.ansible/tmp/ansible-tmp-1764896131.6636953-295-174145510162028/AnsiballZ_edpm_os_net_config.py _
Dec  4 19:55:32 np0005546222 ansible-async_wrapper.py[51862]: Starting module and watcher
Dec  4 19:55:32 np0005546222 ansible-async_wrapper.py[51862]: Start watching 51863 (300)
Dec  4 19:55:32 np0005546222 ansible-async_wrapper.py[51863]: Start module (51863)
Dec  4 19:55:32 np0005546222 ansible-async_wrapper.py[51859]: Return async_wrapper task started.
Dec  4 19:55:32 np0005546222 python3.9[51864]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  4 19:55:33 np0005546222 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  4 19:55:33 np0005546222 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  4 19:55:33 np0005546222 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  4 19:55:33 np0005546222 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  4 19:55:33 np0005546222 kernel: cfg80211: failed to load regulatory.db
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.4597] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.4620] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5310] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5312] audit: op="connection-add" uuid="4733033e-b349-4da8-936a-745565aa8195" name="br-ex-br" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5328] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5329] audit: op="connection-add" uuid="8f533c4f-d253-418e-a730-17ae4582acc0" name="br-ex-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5342] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5344] audit: op="connection-add" uuid="a73e10db-a6ec-4485-b445-912f438e1c86" name="eth1-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5357] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5359] audit: op="connection-add" uuid="8e7ee521-5a0b-4baf-a3c5-f56c52f76df9" name="vlan20-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5371] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5373] audit: op="connection-add" uuid="52bbbaaf-5799-4b23-a85c-f4088e52ce08" name="vlan21-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5386] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5387] audit: op="connection-add" uuid="a8e02914-dcb4-454d-af1a-99a710a94712" name="vlan22-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5400] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5401] audit: op="connection-add" uuid="8351dc76-4b2a-4b38-a38b-ec331fba6f0a" name="vlan23-port" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5422] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5440] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5442] audit: op="connection-add" uuid="f79589b2-1db3-4f8a-bef4-d4fa276641e4" name="br-ex-if" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5493] audit: op="connection-update" uuid="f5ed226a-1553-53a1-8171-c813f4b5c69c" name="ci-private-network" args="connection.controller,connection.port-type,connection.master,connection.timestamp,connection.slave-type,ipv6.dns,ipv6.routes,ipv6.routing-rules,ipv6.method,ipv6.addresses,ipv6.addr-gen-mode,ipv4.routing-rules,ipv4.dns,ipv4.routes,ipv4.never-default,ipv4.method,ipv4.addresses,ovs-external-ids.data,ovs-interface.type" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5510] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5512] audit: op="connection-add" uuid="7dd491ff-d7b6-449f-8ec1-4c6249dea15b" name="vlan20-if" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5529] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5530] audit: op="connection-add" uuid="296ce3d8-abc2-40f9-b037-fd406638e17c" name="vlan21-if" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5547] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5549] audit: op="connection-add" uuid="0406741a-5880-4a7c-b8e6-759f13e1395b" name="vlan22-if" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5566] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5568] audit: op="connection-add" uuid="47411013-6ebd-4ebf-8296-d3c322f63179" name="vlan23-if" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5581] audit: op="connection-delete" uuid="74afdaaf-08cb-315f-8816-01bd59fc3bf4" name="Wired connection 1" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5593] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5602] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5606] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (4733033e-b349-4da8-936a-745565aa8195)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5607] audit: op="connection-activate" uuid="4733033e-b349-4da8-936a-745565aa8195" name="br-ex-br" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5609] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5616] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5621] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8f533c4f-d253-418e-a730-17ae4582acc0)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5624] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5629] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5633] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a73e10db-a6ec-4485-b445-912f438e1c86)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5635] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5642] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5646] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8e7ee521-5a0b-4baf-a3c5-f56c52f76df9)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5648] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5654] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5659] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (52bbbaaf-5799-4b23-a85c-f4088e52ce08)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5660] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5666] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5670] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (a8e02914-dcb4-454d-af1a-99a710a94712)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5672] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5677] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5681] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8351dc76-4b2a-4b38-a38b-ec331fba6f0a)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5682] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5684] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5685] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5691] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5694] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5697] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f79589b2-1db3-4f8a-bef4-d4fa276641e4)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5698] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5701] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5702] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5703] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5704] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5714] device (eth1): disconnecting for new activation request.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5714] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5717] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5719] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5720] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5723] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5726] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5731] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7dd491ff-d7b6-449f-8ec1-4c6249dea15b)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5732] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5734] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5735] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5737] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5739] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5744] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5748] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (296ce3d8-abc2-40f9-b037-fd406638e17c)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5749] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5752] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5753] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5754] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5757] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5762] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5766] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (0406741a-5880-4a7c-b8e6-759f13e1395b)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5767] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5769] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5771] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5772] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5775] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5779] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5782] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (47411013-6ebd-4ebf-8296-d3c322f63179)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5783] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5786] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5787] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5788] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5790] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5800] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv6.method,ipv6.addr-gen-mode,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5802] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5805] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5806] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5812] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5815] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5819] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5822] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5824] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 kernel: ovs-system: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5828] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5831] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5833] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5834] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5839] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5842] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5845] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5846] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 kernel: Timeout policy base is empty
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5849] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5851] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 systemd-udevd[51870]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5854] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5856] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5861] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): canceled DHCP transaction
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5864] dhcp4 (eth0): state changed no lease
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5866] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5875] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5877] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51865 uid=0 result="fail" reason="Device is not activated"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5882] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5914] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5917] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5922] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5956] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5966] device (eth1): disconnecting for new activation request.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5966] audit: op="connection-activate" uuid="f5ed226a-1553-53a1-8171-c813f4b5c69c" name="ci-private-network" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.5995] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6000] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  4 19:55:34 np0005546222 kernel: br-ex: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6125] device (eth1): Activation: starting connection 'ci-private-network' (f5ed226a-1553-53a1-8171-c813f4b5c69c)
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6128] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6134] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6136] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6140] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6143] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6150] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6151] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6152] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6153] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6153] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6154] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6173] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6180] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6184] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6186] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6189] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6192] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6198] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6201] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6206] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 kernel: vlan22: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6209] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6212] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6215] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6217] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 systemd-udevd[51869]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6224] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6226] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6251] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  4 19:55:34 np0005546222 kernel: vlan21: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6270] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6324] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 kernel: vlan23: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6328] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6333] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6340] device (eth1): Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 systemd-udevd[51973]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6361] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 kernel: vlan20: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6381] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6389] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6395] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6400] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6413] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6456] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6458] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6463] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6465] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6471] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6476] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6481] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6496] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6507] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6524] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6543] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6552] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6561] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6569] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6573] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  4 19:55:34 np0005546222 NetworkManager[49092]: <info>  [1764896134.6581] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  4 19:55:35 np0005546222 NetworkManager[49092]: <info>  [1764896135.7866] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.0341] checkpoint[0x564e22c42950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.0345] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.3678] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.3694] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 python3.9[52225]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=status _async_dir=/root/.ansible_async
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.6015] audit: op="networking-control" arg="global-dns-configuration" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.6055] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.6090] audit: op="networking-control" arg="global-dns-configuration" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.6117] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.7636] checkpoint[0x564e22c42a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  4 19:55:36 np0005546222 NetworkManager[49092]: <info>  [1764896136.7645] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51865 uid=0 result="success"
Dec  4 19:55:36 np0005546222 ansible-async_wrapper.py[51863]: Module complete (51863)
Dec  4 19:55:37 np0005546222 ansible-async_wrapper.py[51862]: Done in kid B.
Dec  4 19:55:39 np0005546222 python3.9[52329]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=status _async_dir=/root/.ansible_async
Dec  4 19:55:40 np0005546222 python3.9[52429]: ansible-ansible.legacy.async_status Invoked with jid=j477830243345.51859 mode=cleanup _async_dir=/root/.ansible_async
Dec  4 19:55:41 np0005546222 python3.9[52581]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:55:41 np0005546222 python3.9[52704]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896140.6202617-322-266499731196227/.source.returncode _original_basename=.3_6mzl2u follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:42 np0005546222 python3.9[52856]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:55:43 np0005546222 python3.9[52979]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896141.9363987-338-157019334410010/.source.cfg _original_basename=.z1z3zln4 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:55:43 np0005546222 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  4 19:55:43 np0005546222 python3.9[53134]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:55:44 np0005546222 systemd[1]: Reloading Network Manager...
Dec  4 19:55:44 np0005546222 NetworkManager[49092]: <info>  [1764896144.0313] audit: op="reload" arg="0" pid=53138 uid=0 result="success"
Dec  4 19:55:44 np0005546222 NetworkManager[49092]: <info>  [1764896144.0322] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  4 19:55:44 np0005546222 systemd[1]: Reloaded Network Manager.
Dec  4 19:55:44 np0005546222 systemd[1]: session-10.scope: Deactivated successfully.
Dec  4 19:55:44 np0005546222 systemd[1]: session-10.scope: Consumed 49.725s CPU time.
Dec  4 19:55:44 np0005546222 systemd-logind[792]: Session 10 logged out. Waiting for processes to exit.
Dec  4 19:55:44 np0005546222 systemd-logind[792]: Removed session 10.
Dec  4 19:55:49 np0005546222 systemd-logind[792]: New session 11 of user zuul.
Dec  4 19:55:49 np0005546222 systemd[1]: Started Session 11 of User zuul.
Dec  4 19:55:50 np0005546222 python3.9[53322]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:55:51 np0005546222 python3.9[53476]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:55:52 np0005546222 python3.9[53669]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:55:53 np0005546222 systemd[1]: session-11.scope: Deactivated successfully.
Dec  4 19:55:53 np0005546222 systemd[1]: session-11.scope: Consumed 2.352s CPU time.
Dec  4 19:55:53 np0005546222 systemd-logind[792]: Session 11 logged out. Waiting for processes to exit.
Dec  4 19:55:53 np0005546222 systemd-logind[792]: Removed session 11.
Dec  4 19:55:54 np0005546222 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  4 19:55:58 np0005546222 systemd-logind[792]: New session 12 of user zuul.
Dec  4 19:55:58 np0005546222 systemd[1]: Started Session 12 of User zuul.
Dec  4 19:55:59 np0005546222 python3.9[53852]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:56:00 np0005546222 python3.9[54006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:56:01 np0005546222 python3.9[54162]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:56:02 np0005546222 python3.9[54246]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:56:04 np0005546222 python3.9[54400]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:56:05 np0005546222 python3.9[54595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:06 np0005546222 python3.9[54748]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:56:06 np0005546222 podman[54749]: 2025-12-05 00:56:06.274644418 +0000 UTC m=+0.043926615 system refresh
Dec  4 19:56:07 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 19:56:07 np0005546222 python3.9[54911]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:08 np0005546222 python3.9[55034]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896166.6642826-79-214810692193153/.source.json follow=False _original_basename=podman_network_config.j2 checksum=addbedc07cb79f12a131f0cddb3b2f6a3889c601 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:08 np0005546222 python3.9[55186]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:09 np0005546222 python3.9[55309]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896168.276847-94-26263655149675/.source.conf follow=False _original_basename=registries.conf.j2 checksum=086f9dda0e1e7ae15c548d702b012e23e7cc73fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:56:10 np0005546222 python3.9[55461]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:56:11 np0005546222 python3.9[55613]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:56:11 np0005546222 python3.9[55765]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:56:12 np0005546222 python3.9[55917]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:56:13 np0005546222 python3.9[56069]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:56:15 np0005546222 python3.9[56222]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:56:16 np0005546222 python3.9[56376]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:56:17 np0005546222 python3.9[56528]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:56:18 np0005546222 python3.9[56680]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:56:19 np0005546222 python3.9[56833]: ansible-service_facts Invoked
Dec  4 19:56:19 np0005546222 network[56850]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 19:56:19 np0005546222 network[56851]: 'network-scripts' will be removed from distribution in near future.
Dec  4 19:56:19 np0005546222 network[56852]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 19:56:26 np0005546222 python3.9[57304]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:56:28 np0005546222 python3.9[57457]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  4 19:56:30 np0005546222 python3.9[57609]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:30 np0005546222 python3.9[57734]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896189.4744866-238-52317952355442/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:31 np0005546222 python3.9[57888]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:32 np0005546222 python3.9[58013]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896191.0552998-253-112330262379561/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:33 np0005546222 python3.9[58167]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:34 np0005546222 python3.9[58321]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:56:35 np0005546222 python3.9[58405]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:56:36 np0005546222 python3.9[58559]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:56:37 np0005546222 python3.9[58643]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:56:37 np0005546222 chronyd[795]: chronyd exiting
Dec  4 19:56:37 np0005546222 systemd[1]: Stopping NTP client/server...
Dec  4 19:56:37 np0005546222 systemd[1]: chronyd.service: Deactivated successfully.
Dec  4 19:56:37 np0005546222 systemd[1]: Stopped NTP client/server.
Dec  4 19:56:37 np0005546222 systemd[1]: Starting NTP client/server...
Dec  4 19:56:37 np0005546222 chronyd[58652]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  4 19:56:37 np0005546222 chronyd[58652]: Frequency -28.334 +/- 0.265 ppm read from /var/lib/chrony/drift
Dec  4 19:56:37 np0005546222 chronyd[58652]: Loaded seccomp filter (level 2)
Dec  4 19:56:37 np0005546222 systemd[1]: Started NTP client/server.
Dec  4 19:56:38 np0005546222 systemd[1]: session-12.scope: Deactivated successfully.
Dec  4 19:56:38 np0005546222 systemd[1]: session-12.scope: Consumed 26.737s CPU time.
Dec  4 19:56:38 np0005546222 systemd-logind[792]: Session 12 logged out. Waiting for processes to exit.
Dec  4 19:56:38 np0005546222 systemd-logind[792]: Removed session 12.
Dec  4 19:56:44 np0005546222 systemd-logind[792]: New session 13 of user zuul.
Dec  4 19:56:44 np0005546222 systemd[1]: Started Session 13 of User zuul.
Dec  4 19:56:45 np0005546222 python3.9[58833]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:46 np0005546222 python3.9[58985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:47 np0005546222 python3.9[59108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896206.0332465-34-162815491009642/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:47 np0005546222 systemd[1]: session-13.scope: Deactivated successfully.
Dec  4 19:56:47 np0005546222 systemd[1]: session-13.scope: Consumed 1.766s CPU time.
Dec  4 19:56:47 np0005546222 systemd-logind[792]: Session 13 logged out. Waiting for processes to exit.
Dec  4 19:56:47 np0005546222 systemd-logind[792]: Removed session 13.
Dec  4 19:56:54 np0005546222 systemd-logind[792]: New session 14 of user zuul.
Dec  4 19:56:54 np0005546222 systemd[1]: Started Session 14 of User zuul.
Dec  4 19:56:55 np0005546222 python3.9[59286]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:56:56 np0005546222 python3.9[59442]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:57 np0005546222 python3.9[59617]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:58 np0005546222 python3.9[59740]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764896216.9951377-41-20958911228959/.source.json _original_basename=.7pupz0s5 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:56:59 np0005546222 python3.9[59892]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:56:59 np0005546222 python3.9[60015]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896218.910285-64-33352548782895/.source _original_basename=.wfrgppzh follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:00 np0005546222 python3.9[60167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:57:01 np0005546222 python3.9[60319]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:02 np0005546222 python3.9[60442]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896220.9629488-88-214074288358219/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:57:02 np0005546222 python3.9[60594]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:03 np0005546222 python3.9[60717]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896222.2321987-88-180251409667733/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:57:04 np0005546222 python3.9[60869]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:05 np0005546222 python3.9[61021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:05 np0005546222 python3.9[61144]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896224.4686036-125-72600650522989/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:06 np0005546222 python3.9[61296]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:07 np0005546222 python3.9[61419]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896225.9637728-140-115344248334404/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:08 np0005546222 python3.9[61571]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:57:08 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:08 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:08 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:08 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:08 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:08 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:08 np0005546222 systemd[1]: Starting EDPM Container Shutdown...
Dec  4 19:57:08 np0005546222 systemd[1]: Finished EDPM Container Shutdown.
Dec  4 19:57:09 np0005546222 python3.9[61797]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:10 np0005546222 python3.9[61920]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896229.2022293-163-247688409814825/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:11 np0005546222 python3.9[62072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:11 np0005546222 python3.9[62195]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896230.6928594-178-23941689345884/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:12 np0005546222 python3.9[62347]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:57:12 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:12 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:12 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:13 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:13 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:13 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:13 np0005546222 systemd[1]: Starting Create netns directory...
Dec  4 19:57:13 np0005546222 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 19:57:13 np0005546222 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 19:57:13 np0005546222 systemd[1]: Finished Create netns directory.
Dec  4 19:57:14 np0005546222 python3.9[62575]: ansible-ansible.builtin.service_facts Invoked
Dec  4 19:57:14 np0005546222 network[62592]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 19:57:14 np0005546222 network[62593]: 'network-scripts' will be removed from distribution in near future.
Dec  4 19:57:14 np0005546222 network[62594]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 19:57:19 np0005546222 python3.9[62856]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:57:19 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:19 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:19 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:19 np0005546222 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  4 19:57:19 np0005546222 iptables.init[62895]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  4 19:57:20 np0005546222 iptables.init[62895]: iptables: Flushing firewall rules: [  OK  ]
Dec  4 19:57:20 np0005546222 systemd[1]: iptables.service: Deactivated successfully.
Dec  4 19:57:20 np0005546222 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  4 19:57:21 np0005546222 python3.9[63091]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:57:21 np0005546222 python3.9[63245]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 19:57:22 np0005546222 systemd[1]: Reloading.
Dec  4 19:57:22 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 19:57:22 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 19:57:22 np0005546222 systemd[1]: Starting Netfilter Tables...
Dec  4 19:57:22 np0005546222 systemd[1]: Finished Netfilter Tables.
Dec  4 19:57:23 np0005546222 python3.9[63436]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:57:24 np0005546222 python3.9[63589]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:25 np0005546222 python3.9[63714]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896243.9620895-247-200501691068586/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:25 np0005546222 python3.9[63867]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:57:26 np0005546222 systemd[1]: Reloading OpenSSH server daemon...
Dec  4 19:57:26 np0005546222 systemd[1]: Reloaded OpenSSH server daemon.
Dec  4 19:57:26 np0005546222 python3.9[64023]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:27 np0005546222 python3.9[64175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:27 np0005546222 python3.9[64298]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896246.9736989-278-42443763789819/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:28 np0005546222 python3.9[64450]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  4 19:57:28 np0005546222 systemd[1]: Starting Time & Date Service...
Dec  4 19:57:29 np0005546222 systemd[1]: Started Time & Date Service.
Dec  4 19:57:29 np0005546222 python3.9[64606]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:30 np0005546222 python3.9[64758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:31 np0005546222 python3.9[64881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896250.3588116-313-95273102727955/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:32 np0005546222 python3.9[65033]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:33 np0005546222 python3.9[65156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896251.797346-328-232101191811679/.source.yaml _original_basename=.n1fbqfp9 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:33 np0005546222 python3.9[65308]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:34 np0005546222 python3.9[65431]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896253.3044994-343-204927693497294/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:35 np0005546222 python3.9[65583]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:57:36 np0005546222 python3.9[65736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:57:37 np0005546222 python3[65889]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 19:57:38 np0005546222 python3.9[66041]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:39 np0005546222 python3.9[66164]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896257.63038-382-172527882188618/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:40 np0005546222 python3.9[66316]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:40 np0005546222 python3.9[66439]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896259.3982067-397-183104450018689/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:41 np0005546222 python3.9[66591]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:42 np0005546222 python3.9[66714]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896260.9380646-412-110636760058224/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:42 np0005546222 python3.9[66866]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:43 np0005546222 python3.9[66989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896262.3461065-427-137532319720244/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:44 np0005546222 python3.9[67141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:57:44 np0005546222 python3.9[67264]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896263.7734313-442-26909970014840/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:45 np0005546222 python3.9[67416]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:46 np0005546222 python3.9[67568]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:57:47 np0005546222 python3.9[67727]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:48 np0005546222 python3.9[67880]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:49 np0005546222 python3.9[68032]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:57:50 np0005546222 python3.9[68184]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 19:57:51 np0005546222 python3.9[68337]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  4 19:57:51 np0005546222 systemd[1]: session-14.scope: Deactivated successfully.
Dec  4 19:57:51 np0005546222 systemd[1]: session-14.scope: Consumed 40.592s CPU time.
Dec  4 19:57:51 np0005546222 systemd-logind[792]: Session 14 logged out. Waiting for processes to exit.
Dec  4 19:57:51 np0005546222 systemd-logind[792]: Removed session 14.
Dec  4 19:57:57 np0005546222 systemd-logind[792]: New session 15 of user zuul.
Dec  4 19:57:57 np0005546222 systemd[1]: Started Session 15 of User zuul.
Dec  4 19:57:58 np0005546222 python3.9[68518]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  4 19:57:59 np0005546222 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  4 19:57:59 np0005546222 python3.9[68672]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:58:00 np0005546222 python3.9[68824]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:58:01 np0005546222 python3.9[68976]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD34iQxSDvRWxXWiq324tvvnkHz60HCvPTP/DU7o5oImJ7L5PeQTe9tPl2QVsPDuWSCrwTEupWDG8h+dMSTlGmE2dOPB66Zq0d9sww65ZtOq0JsaxhPfTB3aJe6aQDcYq9WQ/1T/lNE0Do7wQL88mneNtNMuLZD9Irm2WwDI38II50hBLyhLkuA6ik5m8wn++kFZPdu0pcYz24ameu4wB8DSKH8UAT3GBfc11AP8MuI6xtpcOT5Dr88jHtVEYH8eW4XWrKQeyZddDcJui/f6NqC4NrPSF4YgDRQ1z6/33N2E9EycvbOgdOt9pq1jpYaWkMHl2KeaAbNoAdSuXTGDhvCzv18a5QdOMVV7965nJMnpteZZjrhzpHSFkbnMvAaoktDOMhKkfPYUY6HhVdkVM7FntS5oT76c92NL3HNHDuV7Oh57/0epCuWK6LT+2z9SlP7VUPaUa2c/nZDSTeZO/gJmuyeJ9Iu8XtE1KvGRpHt6zVpKl1uyEoc+M5SO7YG+r8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIIWlZK7FF2zVpeujHX1SXvuy5F4vd69JtXI65jfCGUb#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3QjvzM+uHT65E6nwIhM59XNE6tJ4oKmErztLJ1wZJkltdzzAyZYA6BiT1RzCPoMNPk9MeYIRcQ8NtPcaWiPtU=#012 create=True mode=0644 path=/tmp/ansible.vcbw3jbu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:02 np0005546222 python3.9[69128]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vcbw3jbu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:58:03 np0005546222 python3.9[69282]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vcbw3jbu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:04 np0005546222 systemd[1]: session-15.scope: Deactivated successfully.
Dec  4 19:58:04 np0005546222 systemd[1]: session-15.scope: Consumed 4.096s CPU time.
Dec  4 19:58:04 np0005546222 systemd-logind[792]: Session 15 logged out. Waiting for processes to exit.
Dec  4 19:58:04 np0005546222 systemd-logind[792]: Removed session 15.
Dec  4 19:58:10 np0005546222 systemd-logind[792]: New session 16 of user zuul.
Dec  4 19:58:10 np0005546222 systemd[1]: Started Session 16 of User zuul.
Dec  4 19:58:11 np0005546222 python3.9[69460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:58:13 np0005546222 python3.9[69616]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  4 19:58:14 np0005546222 python3.9[69770]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 19:58:16 np0005546222 python3.9[69923]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:58:17 np0005546222 python3.9[70076]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:58:18 np0005546222 python3.9[70230]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:58:19 np0005546222 python3.9[70385]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:19 np0005546222 systemd[1]: session-16.scope: Deactivated successfully.
Dec  4 19:58:19 np0005546222 systemd[1]: session-16.scope: Consumed 5.334s CPU time.
Dec  4 19:58:19 np0005546222 systemd-logind[792]: Session 16 logged out. Waiting for processes to exit.
Dec  4 19:58:19 np0005546222 systemd-logind[792]: Removed session 16.
Dec  4 19:58:24 np0005546222 systemd-logind[792]: New session 17 of user zuul.
Dec  4 19:58:24 np0005546222 systemd[1]: Started Session 17 of User zuul.
Dec  4 19:58:25 np0005546222 python3.9[70564]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:58:26 np0005546222 python3.9[70720]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:58:27 np0005546222 python3.9[70804]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  4 19:58:29 np0005546222 python3.9[70955]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:58:31 np0005546222 python3.9[71106]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 19:58:32 np0005546222 python3.9[71256]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:58:32 np0005546222 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 19:58:32 np0005546222 python3.9[71407]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:58:33 np0005546222 systemd[1]: session-17.scope: Deactivated successfully.
Dec  4 19:58:33 np0005546222 systemd[1]: session-17.scope: Consumed 5.684s CPU time.
Dec  4 19:58:33 np0005546222 systemd-logind[792]: Session 17 logged out. Waiting for processes to exit.
Dec  4 19:58:33 np0005546222 systemd-logind[792]: Removed session 17.
Dec  4 19:58:39 np0005546222 systemd-logind[792]: New session 18 of user zuul.
Dec  4 19:58:39 np0005546222 systemd[1]: Started Session 18 of User zuul.
Dec  4 19:58:40 np0005546222 python3.9[71585]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:58:42 np0005546222 python3.9[71741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:42 np0005546222 python3.9[71893]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:43 np0005546222 python3.9[72045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:44 np0005546222 python3.9[72168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896323.2288954-65-150867799861915/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5896f62e469b0f9145221f5d7571d3434f8e5542 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:45 np0005546222 python3.9[72320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:46 np0005546222 python3.9[72443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896325.1298184-65-134873501176028/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=09fecf97d4f61e8dfa5e5b79c9358a4c1891f28a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:47 np0005546222 python3.9[72595]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:47 np0005546222 chronyd[58652]: Selected source 208.81.1.244 (pool.ntp.org)
Dec  4 19:58:47 np0005546222 python3.9[72718]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896326.638606-65-203465723433256/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=cb6dc863a7e49862862f192524608a2149e74923 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:48 np0005546222 python3.9[72870]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:49 np0005546222 python3.9[73022]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:50 np0005546222 python3.9[73174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:50 np0005546222 python3.9[73297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896329.44266-124-242832380669440/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=795c456286f8d76351a77ecc4e3ba99a628d7436 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:51 np0005546222 python3.9[73449]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:52 np0005546222 python3.9[73572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896330.913201-124-16041081002485/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=09fecf97d4f61e8dfa5e5b79c9358a4c1891f28a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:52 np0005546222 python3.9[73724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:53 np0005546222 python3.9[73847]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896332.3305962-124-168861587018567/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2131ad4d8dcdcc5b81ddd1452ca930972dc6654b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:54 np0005546222 python3.9[73999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:55 np0005546222 python3.9[74151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:58:55 np0005546222 python3.9[74303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:56 np0005546222 python3.9[74426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896335.214728-183-121345435620953/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c2c880d24a89434c0556e43578ba5e67c355e46d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:57 np0005546222 python3.9[74578]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:57 np0005546222 python3.9[74701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896336.6654785-183-218503221300623/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c627cf96372f156350cf2665722ecb932c797bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:58:58 np0005546222 python3.9[74853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:58:59 np0005546222 python3.9[74976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896338.0841691-183-94728250115389/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7a49c6b417aa4025a96f5ee1c9d0c2fef03bae53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:00 np0005546222 python3.9[75128]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:00 np0005546222 python3.9[75280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:01 np0005546222 python3.9[75432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:02 np0005546222 python3.9[75555]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896341.1451666-242-277421999344596/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fbc71f23ed09b9bcd3e04e386ee5074731d93f0c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:02 np0005546222 python3.9[75707]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:03 np0005546222 python3.9[75830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896342.417423-242-57432467684362/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=464076ef88dcc89aa3cbba91e13b4b726d71f651 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:04 np0005546222 python3.9[75982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:04 np0005546222 python3.9[76105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896343.8123755-242-64294443990939/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=742b783dfdbfc50744d200a72e6bc0fd02d3a60e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:06 np0005546222 python3.9[76257]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:06 np0005546222 python3.9[76409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:07 np0005546222 python3.9[76532]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896346.3137732-310-15912402350171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:08 np0005546222 python3.9[76684]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:09 np0005546222 python3.9[76836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:09 np0005546222 python3.9[76959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896348.5102446-334-182765789737311/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:10 np0005546222 python3.9[77111]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:11 np0005546222 python3.9[77263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:11 np0005546222 python3.9[77386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896350.641771-358-19177513981717/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:12 np0005546222 python3.9[77538]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:13 np0005546222 python3.9[77690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:14 np0005546222 python3.9[77813]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896352.942393-382-11635490066362/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:14 np0005546222 python3.9[77965]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:15 np0005546222 python3.9[78117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:16 np0005546222 python3.9[78240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896355.2154088-406-209146982290171/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:17 np0005546222 python3.9[78392]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:17 np0005546222 python3.9[78544]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:18 np0005546222 python3.9[78667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896357.473478-430-25757901754522/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:19 np0005546222 systemd[1]: session-18.scope: Deactivated successfully.
Dec  4 19:59:19 np0005546222 systemd[1]: session-18.scope: Consumed 30.293s CPU time.
Dec  4 19:59:19 np0005546222 systemd-logind[792]: Session 18 logged out. Waiting for processes to exit.
Dec  4 19:59:19 np0005546222 systemd-logind[792]: Removed session 18.
Dec  4 19:59:25 np0005546222 systemd-logind[792]: New session 19 of user zuul.
Dec  4 19:59:25 np0005546222 systemd[1]: Started Session 19 of User zuul.
Dec  4 19:59:26 np0005546222 python3.9[78845]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:59:27 np0005546222 python3.9[79001]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:28 np0005546222 python3.9[79153]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 19:59:29 np0005546222 python3.9[79303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:59:30 np0005546222 python3.9[79455]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  4 19:59:32 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  4 19:59:32 np0005546222 python3.9[79611]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 19:59:33 np0005546222 python3.9[79695]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 19:59:35 np0005546222 python3.9[79848]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 19:59:36 np0005546222 python3[80003]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  4 19:59:37 np0005546222 python3.9[80155]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:38 np0005546222 python3.9[80307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:39 np0005546222 python3.9[80385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:39 np0005546222 python3.9[80537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:40 np0005546222 python3.9[80615]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2vbjb2vj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:41 np0005546222 python3.9[80767]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:41 np0005546222 python3.9[80845]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:42 np0005546222 python3.9[80997]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:59:43 np0005546222 python3[81150]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 19:59:44 np0005546222 python3.9[81302]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:45 np0005546222 python3.9[81427]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896383.7084906-157-257366225876435/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:46 np0005546222 python3.9[81579]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:46 np0005546222 python3.9[81704]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896385.3914971-172-233421426334572/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:47 np0005546222 python3.9[81856]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:48 np0005546222 python3.9[81981]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896386.9870028-187-87464829771111/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:49 np0005546222 python3.9[82133]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:49 np0005546222 python3.9[82258]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896388.5961626-202-254681956390108/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:50 np0005546222 python3.9[82410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 19:59:51 np0005546222 python3.9[82535]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896390.038504-217-25039120914979/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:52 np0005546222 python3.9[82687]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:52 np0005546222 python3.9[82839]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:59:53 np0005546222 python3.9[82994]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:54 np0005546222 python3.9[83146]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:59:55 np0005546222 python3.9[83299]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 19:59:56 np0005546222 python3.9[83453]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:59:56 np0005546222 python3.9[83608]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 19:59:57 np0005546222 python3.9[83758]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 19:59:59 np0005546222 python3.9[83911]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 19:59:59 np0005546222 ovs-vsctl[83912]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  4 19:59:59 np0005546222 python3.9[84064]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:00 np0005546222 python3.9[84219]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:00 np0005546222 ovs-vsctl[84220]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec  4 20:00:01 np0005546222 python3.9[84370]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:00:01 np0005546222 python3.9[84524]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:02 np0005546222 python3.9[84676]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:03 np0005546222 python3.9[84754]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:04 np0005546222 python3.9[84906]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:04 np0005546222 python3.9[84984]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:05 np0005546222 python3.9[85136]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:06 np0005546222 python3.9[85288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:06 np0005546222 python3.9[85366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:07 np0005546222 python3.9[85518]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:07 np0005546222 python3.9[85596]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:08 np0005546222 python3.9[85748]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:00:08 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:08 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:08 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:09 np0005546222 python3.9[85938]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:10 np0005546222 python3.9[86016]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:10 np0005546222 python3.9[86168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:11 np0005546222 python3.9[86246]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:12 np0005546222 python3.9[86398]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:00:12 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:12 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:12 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:12 np0005546222 systemd[1]: Starting Create netns directory...
Dec  4 20:00:12 np0005546222 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  4 20:00:12 np0005546222 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  4 20:00:12 np0005546222 systemd[1]: Finished Create netns directory.
Dec  4 20:00:13 np0005546222 python3.9[86591]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:14 np0005546222 python3.9[86743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:15 np0005546222 python3.9[86866]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896413.8710167-468-58731839497891/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:15 np0005546222 python3.9[87018]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:00:16 np0005546222 python3.9[87170]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:00:17 np0005546222 python3.9[87293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896416.1009028-493-163454999765026/.source.json _original_basename=.09y95qyj follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:18 np0005546222 python3.9[87445]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:20 np0005546222 python3.9[87872]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  4 20:00:21 np0005546222 python3.9[88024]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:00:22 np0005546222 python3.9[88176]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  4 20:00:22 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 20:00:23 np0005546222 python3[88339]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:00:24 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 20:00:26 np0005546222 systemd[1]: var-lib-containers-storage-overlay-compat1693855093-lower\x2dmapped.mount: Deactivated successfully.
Dec  4 20:00:29 np0005546222 podman[88352]: 2025-12-05 01:00:29.662391392 +0000 UTC m=+5.608239358 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 20:00:29 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 20:00:29 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 20:00:29 np0005546222 podman[88472]: 2025-12-05 01:00:29.883129873 +0000 UTC m=+0.062864896 container create d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 20:00:29 np0005546222 podman[88472]: 2025-12-05 01:00:29.851574633 +0000 UTC m=+0.031309626 image pull 3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 20:00:29 np0005546222 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  4 20:00:29 np0005546222 python3[88339]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  4 20:00:30 np0005546222 python3.9[88662]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:00:31 np0005546222 python3.9[88816]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:32 np0005546222 python3.9[88892]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:00:32 np0005546222 python3.9[89043]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896432.2564032-581-234032621706389/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:00:33 np0005546222 python3.9[89119]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:00:33 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:33 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:33 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:34 np0005546222 python3.9[89231]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:00:34 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:34 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:34 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:34 np0005546222 systemd[1]: Starting ovn_controller container...
Dec  4 20:00:34 np0005546222 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  4 20:00:34 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:00:34 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ca743cda8be747f1c67d276c4b62aeeadba99090713c7c0f5f1be3652a04951/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  4 20:00:34 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.
Dec  4 20:00:34 np0005546222 podman[89272]: 2025-12-05 01:00:34.893372299 +0000 UTC m=+0.191138383 container init d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  4 20:00:34 np0005546222 ovn_controller[89286]: + sudo -E kolla_set_configs
Dec  4 20:00:34 np0005546222 podman[89272]: 2025-12-05 01:00:34.92866335 +0000 UTC m=+0.226429334 container start d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  4 20:00:34 np0005546222 edpm-start-podman-container[89272]: ovn_controller
Dec  4 20:00:34 np0005546222 systemd[1]: Created slice User Slice of UID 0.
Dec  4 20:00:34 np0005546222 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  4 20:00:35 np0005546222 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  4 20:00:35 np0005546222 systemd[1]: Starting User Manager for UID 0...
Dec  4 20:00:35 np0005546222 edpm-start-podman-container[89271]: Creating additional drop-in dependency for "ovn_controller" (d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d)
Dec  4 20:00:35 np0005546222 podman[89293]: 2025-12-05 01:00:35.054310797 +0000 UTC m=+0.099528692 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:00:35 np0005546222 systemd[1]: d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d-67d8d571b88bbdcc.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:00:35 np0005546222 systemd[1]: d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d-67d8d571b88bbdcc.service: Failed with result 'exit-code'.
Dec  4 20:00:35 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:35 np0005546222 systemd[89327]: Queued start job for default target Main User Target.
Dec  4 20:00:35 np0005546222 systemd[89327]: Created slice User Application Slice.
Dec  4 20:00:35 np0005546222 systemd[89327]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  4 20:00:35 np0005546222 systemd[89327]: Started Daily Cleanup of User's Temporary Directories.
Dec  4 20:00:35 np0005546222 systemd[89327]: Reached target Paths.
Dec  4 20:00:35 np0005546222 systemd[89327]: Reached target Timers.
Dec  4 20:00:35 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:35 np0005546222 systemd[89327]: Starting D-Bus User Message Bus Socket...
Dec  4 20:00:35 np0005546222 systemd[89327]: Starting Create User's Volatile Files and Directories...
Dec  4 20:00:35 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:35 np0005546222 systemd[89327]: Finished Create User's Volatile Files and Directories.
Dec  4 20:00:35 np0005546222 systemd[89327]: Listening on D-Bus User Message Bus Socket.
Dec  4 20:00:35 np0005546222 systemd[89327]: Reached target Sockets.
Dec  4 20:00:35 np0005546222 systemd[89327]: Reached target Basic System.
Dec  4 20:00:35 np0005546222 systemd[89327]: Reached target Main User Target.
Dec  4 20:00:35 np0005546222 systemd[89327]: Startup finished in 146ms.
Dec  4 20:00:35 np0005546222 systemd[1]: Started User Manager for UID 0.
Dec  4 20:00:35 np0005546222 systemd[1]: Started ovn_controller container.
Dec  4 20:00:35 np0005546222 systemd[1]: Started Session c1 of User root.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: INFO:__main__:Validating config file
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: INFO:__main__:Writing out command to execute
Dec  4 20:00:35 np0005546222 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: ++ cat /run_command
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + ARGS=
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + sudo kolla_copy_cacerts
Dec  4 20:00:35 np0005546222 systemd[1]: Started Session c2 of User root.
Dec  4 20:00:35 np0005546222 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + [[ ! -n '' ]]
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + . kolla_extend_start
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + umask 0022
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5464] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5474] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5486] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5493] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5500] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  4 20:00:35 np0005546222 kernel: br-int: entered promiscuous mode
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 20:00:35 np0005546222 ovn_controller[89286]: 2025-12-05T01:00:35Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.5793] manager: (ovn-f2dffe-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  4 20:00:35 np0005546222 systemd-udevd[89430]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 20:00:35 np0005546222 kernel: genev_sys_6081: entered promiscuous mode
Dec  4 20:00:35 np0005546222 systemd-udevd[89441]: Network interface NamePolicy= disabled on kernel command line.
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.6107] device (genev_sys_6081): carrier: link connected
Dec  4 20:00:35 np0005546222 NetworkManager[49092]: <info>  [1764896435.6116] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec  4 20:00:36 np0005546222 python3.9[89554]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:36 np0005546222 ovs-vsctl[89555]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  4 20:00:37 np0005546222 python3.9[89707]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:37 np0005546222 ovs-vsctl[89709]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  4 20:00:38 np0005546222 python3.9[89862]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:38 np0005546222 ovs-vsctl[89863]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  4 20:00:38 np0005546222 systemd[1]: session-19.scope: Deactivated successfully.
Dec  4 20:00:38 np0005546222 systemd[1]: session-19.scope: Consumed 1min 2.506s CPU time.
Dec  4 20:00:38 np0005546222 systemd-logind[792]: Session 19 logged out. Waiting for processes to exit.
Dec  4 20:00:38 np0005546222 systemd-logind[792]: Removed session 19.
Dec  4 20:00:43 np0005546222 systemd-logind[792]: New session 21 of user zuul.
Dec  4 20:00:43 np0005546222 systemd[1]: Started Session 21 of User zuul.
Dec  4 20:00:44 np0005546222 python3.9[90041]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 20:00:45 np0005546222 systemd[1]: Stopping User Manager for UID 0...
Dec  4 20:00:45 np0005546222 systemd[89327]: Activating special unit Exit the Session...
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped target Main User Target.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped target Basic System.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped target Paths.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped target Sockets.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped target Timers.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  4 20:00:45 np0005546222 systemd[89327]: Closed D-Bus User Message Bus Socket.
Dec  4 20:00:45 np0005546222 systemd[89327]: Stopped Create User's Volatile Files and Directories.
Dec  4 20:00:45 np0005546222 systemd[89327]: Removed slice User Application Slice.
Dec  4 20:00:45 np0005546222 systemd[89327]: Reached target Shutdown.
Dec  4 20:00:45 np0005546222 systemd[89327]: Finished Exit the Session.
Dec  4 20:00:45 np0005546222 systemd[89327]: Reached target Exit the Session.
Dec  4 20:00:45 np0005546222 systemd[1]: user@0.service: Deactivated successfully.
Dec  4 20:00:45 np0005546222 systemd[1]: Stopped User Manager for UID 0.
Dec  4 20:00:45 np0005546222 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  4 20:00:45 np0005546222 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  4 20:00:45 np0005546222 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  4 20:00:45 np0005546222 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  4 20:00:45 np0005546222 systemd[1]: Removed slice User Slice of UID 0.
Dec  4 20:00:46 np0005546222 python3.9[90200]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:00:47 np0005546222 python3.9[90365]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:00:47 np0005546222 systemd[1]: Reloading.
Dec  4 20:00:47 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:00:47 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:00:48 np0005546222 python3.9[90549]: ansible-ansible.builtin.service_facts Invoked
Dec  4 20:00:50 np0005546222 network[90566]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 20:00:50 np0005546222 network[90567]: 'network-scripts' will be removed from distribution in near future.
Dec  4 20:00:50 np0005546222 network[90568]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 20:00:56 np0005546222 python3.9[90830]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:00:57 np0005546222 python3.9[90983]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:00:59 np0005546222 python3.9[91136]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:01:00 np0005546222 python3.9[91289]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:01:01 np0005546222 python3.9[91442]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:01:02 np0005546222 python3.9[91595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:01:03 np0005546222 python3.9[91763]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:01:04 np0005546222 python3.9[91916]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:05 np0005546222 ovn_controller[89286]: 2025-12-05T01:01:05Z|00025|memory|INFO|16000 kB peak resident set size after 29.6 seconds
Dec  4 20:01:05 np0005546222 ovn_controller[89286]: 2025-12-05T01:01:05Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  4 20:01:05 np0005546222 podman[92068]: 2025-12-05 01:01:05.188667418 +0000 UTC m=+0.089653596 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  4 20:01:05 np0005546222 python3.9[92069]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:06 np0005546222 python3.9[92247]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:06 np0005546222 python3.9[92399]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:07 np0005546222 python3.9[92551]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:08 np0005546222 python3.9[92703]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:08 np0005546222 python3.9[92855]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:09 np0005546222 python3.9[93007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:10 np0005546222 python3.9[93159]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:11 np0005546222 python3.9[93311]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:11 np0005546222 python3.9[93463]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:12 np0005546222 python3.9[93615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:13 np0005546222 python3.9[93767]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:13 np0005546222 python3.9[93919]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:01:14 np0005546222 python3.9[94071]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:15 np0005546222 python3.9[94223]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:01:16 np0005546222 python3.9[94375]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:01:16 np0005546222 systemd[1]: Reloading.
Dec  4 20:01:16 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:01:16 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:01:17 np0005546222 python3.9[94563]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:18 np0005546222 python3.9[94716]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:19 np0005546222 python3.9[94869]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:20 np0005546222 python3.9[95022]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:20 np0005546222 python3.9[95175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:21 np0005546222 python3.9[95328]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:22 np0005546222 python3.9[95481]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:01:23 np0005546222 python3.9[95634]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  4 20:01:24 np0005546222 python3.9[95787]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 20:01:25 np0005546222 python3.9[95945]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 20:01:26 np0005546222 python3.9[96105]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 20:01:27 np0005546222 python3.9[96189]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 20:01:35 np0005546222 podman[96204]: 2025-12-05 01:01:35.718347841 +0000 UTC m=+0.127896684 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec  4 20:01:54 np0005546222 kernel: SELinux:  Converting 2758 SID table entries...
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 20:01:54 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  Converting 2758 SID table entries...
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 20:02:03 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 20:02:06 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  4 20:02:06 np0005546222 podman[96419]: 2025-12-05 01:02:06.680440104 +0000 UTC m=+0.098326996 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 20:02:37 np0005546222 podman[108069]: 2025-12-05 01:02:37.69079617 +0000 UTC m=+0.101137216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 20:02:58 np0005546222 kernel: SELinux:  Converting 2759 SID table entries...
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability network_peer_controls=1
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability open_perms=1
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability extended_socket_class=1
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability always_check_network=0
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  4 20:02:58 np0005546222 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  4 20:03:00 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 20:03:00 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  4 20:03:00 np0005546222 dbus-broker-launch[765]: Noticed file-system modification, trigger reload.
Dec  4 20:03:07 np0005546222 systemd[1]: Stopping OpenSSH server daemon...
Dec  4 20:03:07 np0005546222 systemd[1]: sshd.service: Deactivated successfully.
Dec  4 20:03:07 np0005546222 systemd[1]: Stopped OpenSSH server daemon.
Dec  4 20:03:07 np0005546222 systemd[1]: sshd.service: Consumed 2.109s CPU time, read 32.0K from disk, written 8.0K to disk.
Dec  4 20:03:07 np0005546222 systemd[1]: Stopped target sshd-keygen.target.
Dec  4 20:03:07 np0005546222 systemd[1]: Stopping sshd-keygen.target...
Dec  4 20:03:07 np0005546222 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 20:03:07 np0005546222 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 20:03:07 np0005546222 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  4 20:03:07 np0005546222 systemd[1]: Reached target sshd-keygen.target.
Dec  4 20:03:07 np0005546222 systemd[1]: Starting OpenSSH server daemon...
Dec  4 20:03:07 np0005546222 systemd[1]: Started OpenSSH server daemon.
Dec  4 20:03:07 np0005546222 podman[114071]: 2025-12-05 01:03:07.879302259 +0000 UTC m=+0.136082333 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  4 20:03:09 np0005546222 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  4 20:03:09 np0005546222 systemd[1]: Starting man-db-cache-update.service...
Dec  4 20:03:09 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:09 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:09 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:10 np0005546222 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  4 20:03:14 np0005546222 python3.9[118867]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 20:03:14 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:14 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:14 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:15 np0005546222 python3.9[120198]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 20:03:15 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:15 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:15 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:16 np0005546222 python3.9[121425]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 20:03:16 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:16 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:16 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:17 np0005546222 python3.9[122745]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 20:03:17 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:17 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:17 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:17 np0005546222 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  4 20:03:17 np0005546222 systemd[1]: Finished man-db-cache-update.service.
Dec  4 20:03:17 np0005546222 systemd[1]: man-db-cache-update.service: Consumed 10.194s CPU time.
Dec  4 20:03:17 np0005546222 systemd[1]: run-r0bd431dd1bb94b368c7680f119d79f43.service: Deactivated successfully.
Dec  4 20:03:18 np0005546222 python3.9[123589]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:19 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:19 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:19 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:20 np0005546222 python3.9[123778]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:20 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:20 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:20 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:21 np0005546222 python3.9[123968]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:22 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:22 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:22 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:23 np0005546222 python3.9[124158]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:25 np0005546222 python3.9[124313]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:25 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:25 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:25 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:26 np0005546222 python3.9[124503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  4 20:03:26 np0005546222 systemd[1]: Reloading.
Dec  4 20:03:26 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:03:26 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:03:26 np0005546222 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  4 20:03:26 np0005546222 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  4 20:03:27 np0005546222 python3.9[124696]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:28 np0005546222 python3.9[124851]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:30 np0005546222 python3.9[125006]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:31 np0005546222 python3.9[125161]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:32 np0005546222 python3.9[125316]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:33 np0005546222 python3.9[125471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:34 np0005546222 python3.9[125626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:35 np0005546222 python3.9[125781]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:36 np0005546222 python3.9[125936]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:37 np0005546222 python3.9[126091]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:38 np0005546222 python3.9[126246]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:38 np0005546222 podman[126248]: 2025-12-05 01:03:38.189401166 +0000 UTC m=+0.138567978 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  4 20:03:39 np0005546222 python3.9[126429]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:41 np0005546222 python3.9[126584]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:41 np0005546222 python3.9[126739]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  4 20:03:42 np0005546222 python3.9[126894]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:43 np0005546222 python3.9[127046]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:43 np0005546222 python3.9[127198]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:44 np0005546222 python3.9[127350]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:45 np0005546222 python3.9[127502]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:46 np0005546222 python3.9[127654]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:03:47 np0005546222 python3.9[127806]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:47 np0005546222 python3.9[127931]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896626.346985-554-99597404895868/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:48 np0005546222 python3.9[128083]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:49 np0005546222 python3.9[128208]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896628.0172966-554-279389015720054/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:49 np0005546222 python3.9[128360]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:50 np0005546222 python3.9[128485]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896629.2994168-554-281457487101718/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:51 np0005546222 python3.9[128637]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:51 np0005546222 python3.9[128762]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896630.55554-554-2717490696401/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:52 np0005546222 python3.9[128914]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:52 np0005546222 python3.9[129039]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896631.845243-554-80891701137782/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:53 np0005546222 python3.9[129191]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:54 np0005546222 python3.9[129316]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896633.0404656-554-13431186608292/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:55 np0005546222 python3.9[129468]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:55 np0005546222 python3.9[129591]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896634.432735-554-202042377749202/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:56 np0005546222 python3.9[129743]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:03:57 np0005546222 python3.9[129868]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764896635.9371138-554-276378709331709/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:57 np0005546222 python3.9[130020]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  4 20:03:58 np0005546222 python3.9[130173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:59 np0005546222 python3.9[130325]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:03:59 np0005546222 python3.9[130477]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:00 np0005546222 python3.9[130629]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:01 np0005546222 python3.9[130781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:01 np0005546222 python3.9[130933]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:02 np0005546222 python3.9[131085]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:02 np0005546222 python3.9[131237]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:03 np0005546222 python3.9[131389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:04 np0005546222 python3.9[131541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:04 np0005546222 python3.9[131693]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:05 np0005546222 python3.9[131845]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:06 np0005546222 python3.9[131997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:06 np0005546222 python3.9[132149]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:07 np0005546222 python3.9[132301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:08 np0005546222 python3.9[132424]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896646.9896336-775-12580375816305/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:08 np0005546222 podman[132548]: 2025-12-05 01:04:08.606164854 +0000 UTC m=+0.081620709 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  4 20:04:08 np0005546222 python3.9[132594]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:09 np0005546222 python3.9[132724]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896648.1929936-775-235011796221774/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:10 np0005546222 python3.9[132876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:10 np0005546222 python3.9[132999]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896649.5615401-775-33584170055738/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:11 np0005546222 python3.9[133151]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:11 np0005546222 python3.9[133274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896650.7667232-775-184861549337845/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:12 np0005546222 python3.9[133426]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:13 np0005546222 python3.9[133549]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896651.9649706-775-12252156789663/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:13 np0005546222 python3.9[133701]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:14 np0005546222 python3.9[133824]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896653.2135928-775-238228153391353/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:15 np0005546222 python3.9[133976]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:15 np0005546222 python3.9[134099]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896654.451138-775-94665213687039/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:16 np0005546222 python3.9[134251]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:16 np0005546222 python3.9[134374]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896655.8195562-775-50173359051458/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:17 np0005546222 python3.9[134526]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:18 np0005546222 python3.9[134649]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896656.9842217-775-40833753809001/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:18 np0005546222 python3.9[134801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:19 np0005546222 python3.9[134924]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896658.2251844-775-132716557087409/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:20 np0005546222 python3.9[135076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:21 np0005546222 python3.9[135199]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896659.549394-775-3087490862360/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:21 np0005546222 python3.9[135351]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:22 np0005546222 python3.9[135474]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896661.3103588-775-199062854097710/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:23 np0005546222 python3.9[135626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:23 np0005546222 python3.9[135749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896662.586354-775-49232989099221/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:24 np0005546222 python3.9[135901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:25 np0005546222 python3.9[136024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896663.9311082-775-47238445569365/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:26 np0005546222 python3.9[136174]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:04:27 np0005546222 python3.9[136329]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  4 20:04:28 np0005546222 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  4 20:04:29 np0005546222 python3.9[136485]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:29 np0005546222 python3.9[136637]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:30 np0005546222 python3.9[136789]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:31 np0005546222 python3.9[136941]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:32 np0005546222 python3.9[137093]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:32 np0005546222 python3.9[137245]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:33 np0005546222 python3.9[137397]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:34 np0005546222 python3.9[137549]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:35 np0005546222 python3.9[137701]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:35 np0005546222 python3.9[137853]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:36 np0005546222 python3.9[138005]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:04:36 np0005546222 systemd[1]: Reloading.
Dec  4 20:04:36 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:04:36 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:04:37 np0005546222 systemd[1]: Starting libvirt logging daemon socket...
Dec  4 20:04:37 np0005546222 systemd[1]: Listening on libvirt logging daemon socket.
Dec  4 20:04:37 np0005546222 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  4 20:04:37 np0005546222 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  4 20:04:37 np0005546222 systemd[1]: Starting libvirt logging daemon...
Dec  4 20:04:37 np0005546222 systemd[1]: Started libvirt logging daemon.
Dec  4 20:04:38 np0005546222 python3.9[138198]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:04:38 np0005546222 systemd[1]: Reloading.
Dec  4 20:04:38 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:04:38 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:04:38 np0005546222 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  4 20:04:38 np0005546222 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  4 20:04:38 np0005546222 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  4 20:04:38 np0005546222 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  4 20:04:38 np0005546222 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  4 20:04:38 np0005546222 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  4 20:04:38 np0005546222 systemd[1]: Starting libvirt nodedev daemon...
Dec  4 20:04:38 np0005546222 systemd[1]: Started libvirt nodedev daemon.
Dec  4 20:04:39 np0005546222 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  4 20:04:39 np0005546222 podman[138362]: 2025-12-05 01:04:39.232164672 +0000 UTC m=+0.127924047 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  4 20:04:39 np0005546222 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  4 20:04:39 np0005546222 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  4 20:04:39 np0005546222 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  4 20:04:39 np0005546222 python3.9[138441]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:04:39 np0005546222 systemd[1]: Reloading.
Dec  4 20:04:39 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:04:39 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:04:39 np0005546222 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  4 20:04:39 np0005546222 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  4 20:04:39 np0005546222 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  4 20:04:39 np0005546222 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  4 20:04:39 np0005546222 systemd[1]: Starting libvirt proxy daemon...
Dec  4 20:04:39 np0005546222 systemd[1]: Started libvirt proxy daemon.
Dec  4 20:04:40 np0005546222 setroubleshoot[138363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 2ebc3f3e-823e-438e-8e55-7092ceae60db
Dec  4 20:04:40 np0005546222 setroubleshoot[138363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Dec  4 20:04:40 np0005546222 python3.9[138661]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:04:40 np0005546222 systemd[1]: Reloading.
Dec  4 20:04:40 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:04:40 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:04:41 np0005546222 systemd[1]: Listening on libvirt locking daemon socket.
Dec  4 20:04:41 np0005546222 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  4 20:04:41 np0005546222 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  4 20:04:41 np0005546222 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  4 20:04:41 np0005546222 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  4 20:04:41 np0005546222 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  4 20:04:41 np0005546222 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  4 20:04:41 np0005546222 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  4 20:04:41 np0005546222 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  4 20:04:41 np0005546222 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  4 20:04:41 np0005546222 systemd[1]: Starting libvirt QEMU daemon...
Dec  4 20:04:41 np0005546222 systemd[1]: Started libvirt QEMU daemon.
Dec  4 20:04:42 np0005546222 python3.9[138875]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:04:42 np0005546222 systemd[1]: Reloading.
Dec  4 20:04:42 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:04:42 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:04:42 np0005546222 systemd[1]: Starting libvirt secret daemon socket...
Dec  4 20:04:42 np0005546222 systemd[1]: Listening on libvirt secret daemon socket.
Dec  4 20:04:42 np0005546222 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  4 20:04:42 np0005546222 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  4 20:04:42 np0005546222 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  4 20:04:42 np0005546222 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  4 20:04:42 np0005546222 systemd[1]: Starting libvirt secret daemon...
Dec  4 20:04:42 np0005546222 systemd[1]: Started libvirt secret daemon.
Dec  4 20:04:43 np0005546222 python3.9[139087]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:44 np0005546222 python3.9[139239]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:04:45 np0005546222 python3.9[139391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:45 np0005546222 python3.9[139514]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896684.8031545-1120-234277022385105/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:46 np0005546222 python3.9[139666]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:47 np0005546222 python3.9[139818]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:48 np0005546222 python3.9[139896]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:48 np0005546222 python3.9[140048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:49 np0005546222 python3.9[140126]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g3sagl8j recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:49 np0005546222 python3.9[140278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:50 np0005546222 python3.9[140356]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:50 np0005546222 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  4 20:04:50 np0005546222 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  4 20:04:51 np0005546222 python3.9[140508]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:04:52 np0005546222 python3[140661]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 20:04:52 np0005546222 python3.9[140813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:53 np0005546222 python3.9[140891]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:54 np0005546222 python3.9[141043]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:54 np0005546222 python3.9[141121]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:55 np0005546222 python3.9[141273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:56 np0005546222 python3.9[141351]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:57 np0005546222 python3.9[141503]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:57 np0005546222 python3.9[141581]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:58 np0005546222 python3.9[141733]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:04:59 np0005546222 python3.9[141858]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896697.7921166-1245-268524551476562/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:04:59 np0005546222 python3.9[142010]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:00 np0005546222 python3.9[142162]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:05:01 np0005546222 python3.9[142317]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:02 np0005546222 python3.9[142469]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:05:02 np0005546222 python3.9[142622]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:05:03 np0005546222 python3.9[142776]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:05:04 np0005546222 python3.9[142931]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:05 np0005546222 python3.9[143083]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:05 np0005546222 python3.9[143206]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896704.7191954-1317-86518957866666/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:06 np0005546222 python3.9[143358]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:07 np0005546222 python3.9[143481]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896706.1512437-1332-202513347411991/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:08 np0005546222 python3.9[143633]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:08 np0005546222 python3.9[143756]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896707.5948699-1347-147789048625061/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:09 np0005546222 podman[143908]: 2025-12-05 01:05:09.44576415 +0000 UTC m=+0.126521338 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:05:09 np0005546222 python3.9[143909]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:05:09 np0005546222 systemd[1]: Reloading.
Dec  4 20:05:09 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:05:09 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:05:11 np0005546222 systemd[1]: Reached target edpm_libvirt.target.
Dec  4 20:05:11 np0005546222 python3.9[144127]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  4 20:05:11 np0005546222 systemd[1]: Reloading.
Dec  4 20:05:11 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:05:11 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:05:12 np0005546222 systemd[1]: Reloading.
Dec  4 20:05:12 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:05:12 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:05:12 np0005546222 systemd[1]: session-21.scope: Deactivated successfully.
Dec  4 20:05:12 np0005546222 systemd[1]: session-21.scope: Consumed 3min 31.189s CPU time.
Dec  4 20:05:12 np0005546222 systemd-logind[792]: Session 21 logged out. Waiting for processes to exit.
Dec  4 20:05:12 np0005546222 systemd-logind[792]: Removed session 21.
Dec  4 20:05:18 np0005546222 systemd-logind[792]: New session 22 of user zuul.
Dec  4 20:05:18 np0005546222 systemd[1]: Started Session 22 of User zuul.
Dec  4 20:05:20 np0005546222 python3.9[144386]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 20:05:21 np0005546222 python3.9[144542]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:05:21 np0005546222 systemd[1]: Reloading.
Dec  4 20:05:21 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:05:21 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:05:23 np0005546222 python3.9[144726]: ansible-ansible.builtin.service_facts Invoked
Dec  4 20:05:23 np0005546222 network[144743]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 20:05:23 np0005546222 network[144744]: 'network-scripts' will be removed from distribution in near future.
Dec  4 20:05:23 np0005546222 network[144745]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 20:05:29 np0005546222 python3.9[145016]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:05:30 np0005546222 python3.9[145169]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:31 np0005546222 python3.9[145321]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:32 np0005546222 python3.9[145473]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:05:33 np0005546222 python3.9[145625]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:05:34 np0005546222 python3.9[145777]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:05:34 np0005546222 systemd[1]: Reloading.
Dec  4 20:05:34 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:05:34 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:05:35 np0005546222 python3.9[145964]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:05:35 np0005546222 python3.9[146117]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:05:36 np0005546222 python3.9[146267]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:05:37 np0005546222 python3.9[146419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:38 np0005546222 python3.9[146540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896736.7981956-133-90194017109416/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:05:39 np0005546222 python3.9[146692]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  4 20:05:39 np0005546222 podman[146769]: 2025-12-05 01:05:39.705129721 +0000 UTC m=+0.117314037 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  4 20:05:40 np0005546222 python3.9[146873]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  4 20:05:40 np0005546222 python3.9[147026]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  4 20:05:41 np0005546222 python3.9[147184]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  4 20:05:43 np0005546222 python3.9[147342]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:43 np0005546222 python3.9[147463]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896742.661243-201-85843281529925/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:44 np0005546222 python3.9[147613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:44 np0005546222 python3.9[147734]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896743.8280778-201-76061545703720/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:45 np0005546222 python3.9[147884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:46 np0005546222 python3.9[148005]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896744.991766-201-62915567657064/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:46 np0005546222 python3.9[148155]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:05:47 np0005546222 python3.9[148307]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:05:48 np0005546222 python3.9[148459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:48 np0005546222 python3.9[148580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896747.6889687-260-97483718524400/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:49 np0005546222 python3.9[148730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:49 np0005546222 python3.9[148806]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:50 np0005546222 python3.9[148956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:51 np0005546222 python3.9[149077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896750.103119-260-115629858909633/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:51 np0005546222 python3.9[149227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:52 np0005546222 python3.9[149348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896751.396997-260-245763008668448/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:53 np0005546222 python3.9[149498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:53 np0005546222 python3.9[149619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896752.6221862-260-148524536485348/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:54 np0005546222 python3.9[149769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:55 np0005546222 python3.9[149890]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896753.8618016-260-17095427333205/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:55 np0005546222 python3.9[150040]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:56 np0005546222 python3.9[150161]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896755.2900317-260-250805069559045/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:57 np0005546222 python3.9[150311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:57 np0005546222 python3.9[150432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896756.647258-260-21738875230583/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:05:58 np0005546222 python3.9[150582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:05:59 np0005546222 python3.9[150703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896757.9909909-260-113179732565968/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:00 np0005546222 python3.9[150853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:00 np0005546222 python3.9[150974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896759.4269624-260-192098596907389/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:01 np0005546222 python3.9[151124]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:02 np0005546222 python3.9[151245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896761.0721328-260-233894767635941/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:03 np0005546222 python3.9[151395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:03 np0005546222 python3.9[151471]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:04 np0005546222 python3.9[151621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:04 np0005546222 python3.9[151697]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:05 np0005546222 python3.9[151847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:06 np0005546222 python3.9[151923]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:06 np0005546222 python3.9[152075]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:07 np0005546222 python3.9[152227]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:08 np0005546222 python3.9[152379]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:06:09 np0005546222 python3.9[152531]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:06:09 np0005546222 systemd[1]: Reloading.
Dec  4 20:06:09 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:06:09 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:06:09 np0005546222 systemd[1]: Listening on Podman API Socket.
Dec  4 20:06:09 np0005546222 podman[152570]: 2025-12-05 01:06:09.886829477 +0000 UTC m=+0.095001678 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  4 20:06:10 np0005546222 python3.9[152746]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:11 np0005546222 python3.9[152869]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:06:11 np0005546222 python3.9[152945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:12 np0005546222 python3.9[153068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896770.14119-482-75715068803620/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:06:13 np0005546222 python3.9[153220]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  4 20:06:14 np0005546222 python3.9[153372]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:06:15 np0005546222 python3[153524]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:06:31 np0005546222 podman[153538]: 2025-12-05 01:06:31.03494709 +0000 UTC m=+15.282830028 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  4 20:06:31 np0005546222 podman[153679]: 2025-12-05 01:06:31.170939389 +0000 UTC m=+0.046878695 container create 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  4 20:06:31 np0005546222 podman[153679]: 2025-12-05 01:06:31.143006739 +0000 UTC m=+0.018946065 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  4 20:06:31 np0005546222 python3[153524]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec  4 20:06:31 np0005546222 python3.9[153867]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:06:32 np0005546222 python3.9[154021]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:33 np0005546222 python3.9[154172]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896792.7826326-546-15650274504505/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:34 np0005546222 python3.9[154248]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:06:34 np0005546222 systemd[1]: Reloading.
Dec  4 20:06:34 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:06:34 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:06:35 np0005546222 python3.9[154359]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:06:35 np0005546222 systemd[1]: Reloading.
Dec  4 20:06:35 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:06:35 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:06:35 np0005546222 systemd[1]: Starting ceilometer_agent_compute container...
Dec  4 20:06:35 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:06:35 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:35 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:35 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:35 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:35 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec  4 20:06:35 np0005546222 podman[154399]: 2025-12-05 01:06:35.798492236 +0000 UTC m=+0.145941617 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + sudo -E kolla_set_configs
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: sudo: unable to send audit message: Operation not permitted
Dec  4 20:06:35 np0005546222 podman[154399]: 2025-12-05 01:06:35.833152734 +0000 UTC m=+0.180602075 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:06:35 np0005546222 podman[154399]: ceilometer_agent_compute
Dec  4 20:06:35 np0005546222 systemd[1]: Started ceilometer_agent_compute container.
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Validating config file
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Copying service configuration files
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:06:35 np0005546222 podman[154421]: 2025-12-05 01:06:35.903685027 +0000 UTC m=+0.054924653 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: INFO:__main__:Writing out command to execute
Dec  4 20:06:35 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:06:35 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.service: Failed with result 'exit-code'.
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: ++ cat /run_command
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + ARGS=
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + sudo kolla_copy_cacerts
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: sudo: unable to send audit message: Operation not permitted
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + [[ ! -n '' ]]
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + . kolla_extend_start
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + umask 0022
Dec  4 20:06:35 np0005546222 ceilometer_agent_compute[154414]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.678 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.679 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.680 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.681 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.682 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.683 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.684 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.685 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.686 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.687 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.688 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.689 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.690 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.691 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.713 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.714 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.715 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.716 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.717 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.718 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.719 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.720 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.721 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.722 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.723 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.724 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.725 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.726 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.727 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.729 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.732 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.733 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  4 20:06:36 np0005546222 python3.9[154597]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.942 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.951 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.952 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  4 20:06:36 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:36.952 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  4 20:06:36 np0005546222 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.051 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.068 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.069 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.070 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.071 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.072 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.073 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.074 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.075 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.076 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.077 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.078 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.079 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.080 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.081 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.083 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.094 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fe6f6a92390>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.095 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93860>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a938c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91910>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91940>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9cc3170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.096 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a939e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93a10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91a30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9949a60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93a70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93ad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f9db02f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.097 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a902f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93b30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91b80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a923c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91c10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91ca0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91d00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a91d30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.098 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a90590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fe6f6a93800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fe6f6a93da0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fe6f6635f70>] with cache [{}], pollster history [{'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fe6f6a93890>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fe6f6a91af0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fe6f6a938f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fe6f6a91a90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fe6fa43adb0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fe6f6a93950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fe6f6a939b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fe6f7bb14f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.100 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fe6f6a91c70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fe6f6a91a00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fe6f6a93a40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fe6f6a93aa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fe6f6a93d70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fe6f6a902c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fe6f6a93b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.101 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fe6f6a91b50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fe6f92cf770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fe6f6a91be0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fe6f6a919d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fe6f6a91cd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fe6f6a93dd0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fe6f6a93d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.102 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fe6f6a90560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fe6f91da720>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fe6f7c13a10>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.152 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  4 20:06:37 np0005546222 ceilometer_agent_compute[154414]: 2025-12-05 01:06:37.159 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  4 20:06:37 np0005546222 virtqemud[138703]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  4 20:06:37 np0005546222 virtqemud[138703]: hostname: compute-0
Dec  4 20:06:37 np0005546222 virtqemud[138703]: End of file while reading data: Input/output error
Dec  4 20:06:37 np0005546222 virtqemud[138703]: End of file while reading data: Input/output error
Dec  4 20:06:37 np0005546222 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  4 20:06:37 np0005546222 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Consumed 1.470s CPU time.
Dec  4 20:06:37 np0005546222 podman[154609]: 2025-12-05 01:06:37.327359655 +0000 UTC m=+0.315920623 container died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute)
Dec  4 20:06:37 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-3dc2176127e65ca3.timer: Deactivated successfully.
Dec  4 20:06:37 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec  4 20:06:37 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-userdata-shm.mount: Deactivated successfully.
Dec  4 20:06:37 np0005546222 systemd[1]: var-lib-containers-storage-overlay-e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1-merged.mount: Deactivated successfully.
Dec  4 20:06:38 np0005546222 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  4 20:06:40 np0005546222 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  4 20:06:40 np0005546222 podman[154647]: 2025-12-05 01:06:40.168713827 +0000 UTC m=+0.122565946 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:06:41 np0005546222 podman[154609]: 2025-12-05 01:06:41.10237904 +0000 UTC m=+4.090939938 container cleanup 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm)
Dec  4 20:06:41 np0005546222 podman[154609]: ceilometer_agent_compute
Dec  4 20:06:41 np0005546222 podman[154675]: ceilometer_agent_compute
Dec  4 20:06:41 np0005546222 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  4 20:06:41 np0005546222 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  4 20:06:41 np0005546222 systemd[1]: Starting ceilometer_agent_compute container...
Dec  4 20:06:41 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:06:41 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:41 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:41 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:41 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:41 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec  4 20:06:41 np0005546222 podman[154687]: 2025-12-05 01:06:41.354511407 +0000 UTC m=+0.135858346 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + sudo -E kolla_set_configs
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: sudo: unable to send audit message: Operation not permitted
Dec  4 20:06:41 np0005546222 podman[154687]: 2025-12-05 01:06:41.391993421 +0000 UTC m=+0.173340330 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  4 20:06:41 np0005546222 podman[154687]: ceilometer_agent_compute
Dec  4 20:06:41 np0005546222 systemd[1]: Started ceilometer_agent_compute container.
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Validating config file
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Copying service configuration files
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: INFO:__main__:Writing out command to execute
Dec  4 20:06:41 np0005546222 podman[154709]: 2025-12-05 01:06:41.468172828 +0000 UTC m=+0.064171518 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: ++ cat /run_command
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + ARGS=
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + sudo kolla_copy_cacerts
Dec  4 20:06:41 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:06:41 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed with result 'exit-code'.
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: sudo: unable to send audit message: Operation not permitted
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + [[ ! -n '' ]]
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + . kolla_extend_start
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + umask 0022
Dec  4 20:06:41 np0005546222 ceilometer_agent_compute[154702]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  4 20:06:42 np0005546222 python3.9[154885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.322 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.323 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.324 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.325 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.326 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.327 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.328 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.329 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.330 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.331 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.332 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.333 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.334 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.335 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.358 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.358 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.359 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.360 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.361 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.362 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.363 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.364 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.365 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.366 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.367 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.368 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.369 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.370 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.371 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.372 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.373 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.375 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.377 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.378 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.380 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.387 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  4 20:06:42 np0005546222 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.509 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.510 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.511 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.512 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.513 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.514 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.515 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.516 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.517 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.518 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.519 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.520 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.521 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.522 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.524 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.538 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:06:42.552 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:06:42 np0005546222 python3.9[155022]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896801.6723359-578-202301564311480/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:06:43 np0005546222 python3.9[155174]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  4 20:06:44 np0005546222 python3.9[155326]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:06:45 np0005546222 python3[155478]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:06:46 np0005546222 podman[155491]: 2025-12-05 01:06:46.992875053 +0000 UTC m=+1.219950093 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  4 20:06:47 np0005546222 podman[155587]: 2025-12-05 01:06:47.13104152 +0000 UTC m=+0.043271064 container create 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible)
Dec  4 20:06:47 np0005546222 podman[155587]: 2025-12-05 01:06:47.106728891 +0000 UTC m=+0.018958395 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  4 20:06:47 np0005546222 python3[155478]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  4 20:06:47 np0005546222 python3.9[155777]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:06:48 np0005546222 python3.9[155931]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:49 np0005546222 python3.9[156082]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896808.884332-631-140619042267442/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:06:50 np0005546222 python3.9[156158]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:06:50 np0005546222 systemd[1]: Reloading.
Dec  4 20:06:50 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:06:50 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:06:51 np0005546222 python3.9[156270]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:06:51 np0005546222 systemd[1]: Reloading.
Dec  4 20:06:51 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:06:51 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:06:51 np0005546222 systemd[1]: Starting node_exporter container...
Dec  4 20:06:51 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:06:51 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:51 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:51 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec  4 20:06:51 np0005546222 podman[156311]: 2025-12-05 01:06:51.811723054 +0000 UTC m=+0.156042898 container init 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.825Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=arp
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=bcache
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=bonding
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=cpu
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=edac
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.826Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=filefd
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netclass
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netdev
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=netstat
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nfs
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=nvme
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=softnet
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=systemd
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=xfs
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.827Z caller=node_exporter.go:117 level=info collector=zfs
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.828Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  4 20:06:51 np0005546222 node_exporter[156326]: ts=2025-12-05T01:06:51.828Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  4 20:06:51 np0005546222 podman[156311]: 2025-12-05 01:06:51.845001669 +0000 UTC m=+0.189321453 container start 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:06:51 np0005546222 podman[156311]: node_exporter
Dec  4 20:06:51 np0005546222 systemd[1]: Started node_exporter container.
Dec  4 20:06:51 np0005546222 podman[156335]: 2025-12-05 01:06:51.909812935 +0000 UTC m=+0.050961320 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:06:52 np0005546222 python3.9[156510]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:06:53 np0005546222 systemd[1]: Stopping node_exporter container...
Dec  4 20:06:53 np0005546222 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  4 20:06:53 np0005546222 podman[156514]: 2025-12-05 01:06:53.859101825 +0000 UTC m=+0.053616573 container died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:06:53 np0005546222 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-631ee002f4e24dfa.timer: Deactivated successfully.
Dec  4 20:06:53 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec  4 20:06:53 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-userdata-shm.mount: Deactivated successfully.
Dec  4 20:06:53 np0005546222 systemd[1]: var-lib-containers-storage-overlay-ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717-merged.mount: Deactivated successfully.
Dec  4 20:06:54 np0005546222 podman[156514]: 2025-12-05 01:06:54.031545148 +0000 UTC m=+0.226059886 container cleanup 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  4 20:06:54 np0005546222 podman[156514]: node_exporter
Dec  4 20:06:54 np0005546222 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  4 20:06:54 np0005546222 podman[156546]: node_exporter
Dec  4 20:06:54 np0005546222 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  4 20:06:54 np0005546222 systemd[1]: Stopped node_exporter container.
Dec  4 20:06:54 np0005546222 systemd[1]: Starting node_exporter container...
Dec  4 20:06:54 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:06:54 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:54 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:06:54 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec  4 20:06:54 np0005546222 podman[156559]: 2025-12-05 01:06:54.269981513 +0000 UTC m=+0.129353766 container init 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.289Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.290Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=arp
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=bcache
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=bonding
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=cpu
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=edac
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=filefd
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netclass
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netdev
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=netstat
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nfs
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=nvme
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=softnet
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=systemd
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=xfs
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.291Z caller=node_exporter.go:117 level=info collector=zfs
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.292Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  4 20:06:54 np0005546222 node_exporter[156575]: ts=2025-12-05T01:06:54.292Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  4 20:06:54 np0005546222 podman[156559]: 2025-12-05 01:06:54.304240128 +0000 UTC m=+0.163612401 container start 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  4 20:06:54 np0005546222 podman[156559]: node_exporter
Dec  4 20:06:54 np0005546222 systemd[1]: Started node_exporter container.
Dec  4 20:06:54 np0005546222 podman[156584]: 2025-12-05 01:06:54.403114774 +0000 UTC m=+0.078818579 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:06:55 np0005546222 python3.9[156758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:06:55 np0005546222 python3.9[156881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896814.5393744-663-258535666563393/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:06:56 np0005546222 python3.9[157033]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  4 20:06:57 np0005546222 python3.9[157185]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:06:58 np0005546222 python3[157337]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:07:00 np0005546222 podman[157350]: 2025-12-05 01:07:00.068777021 +0000 UTC m=+1.387408642 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  4 20:07:00 np0005546222 podman[157447]: 2025-12-05 01:07:00.181975248 +0000 UTC m=+0.040117087 container create 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Dec  4 20:07:00 np0005546222 podman[157447]: 2025-12-05 01:07:00.160420834 +0000 UTC m=+0.018562673 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  4 20:07:00 np0005546222 python3[157337]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec  4 20:07:01 np0005546222 python3.9[157638]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:07:01 np0005546222 python3.9[157792]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:02 np0005546222 python3.9[157943]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896822.005405-716-253758520666698/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:03 np0005546222 python3.9[158019]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:07:03 np0005546222 systemd[1]: Reloading.
Dec  4 20:07:03 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:07:03 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:07:04 np0005546222 python3.9[158130]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:07:04 np0005546222 systemd[1]: Reloading.
Dec  4 20:07:04 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:07:04 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:07:04 np0005546222 systemd[1]: Starting podman_exporter container...
Dec  4 20:07:04 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:07:04 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:04 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:04 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec  4 20:07:04 np0005546222 podman[158171]: 2025-12-05 01:07:04.943193373 +0000 UTC m=+0.161237708 container init 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  4 20:07:04 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  4 20:07:04 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  4 20:07:04 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  4 20:07:04 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:04.964Z caller=handler.go:105 level=info collector=container
Dec  4 20:07:04 np0005546222 podman[158171]: 2025-12-05 01:07:04.985229478 +0000 UTC m=+0.203273763 container start 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:07:04 np0005546222 podman[158171]: podman_exporter
Dec  4 20:07:04 np0005546222 systemd[1]: Starting Podman API Service...
Dec  4 20:07:04 np0005546222 systemd[1]: Started Podman API Service.
Dec  4 20:07:05 np0005546222 systemd[1]: Started podman_exporter container.
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Setting parallel job count to 25"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Using sqlite as database backend"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  4 20:07:05 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  4 20:07:05 np0005546222 podman[158197]: time="2025-12-05T01:07:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:07:05 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9686 "" "Go-http-client/1.1"
Dec  4 20:07:05 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:05.082Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  4 20:07:05 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:05.083Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  4 20:07:05 np0005546222 podman_exporter[158186]: ts=2025-12-05T01:07:05.084Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  4 20:07:05 np0005546222 podman[158196]: 2025-12-05 01:07:05.090434329 +0000 UTC m=+0.084970589 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  4 20:07:05 np0005546222 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:07:05 np0005546222 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.service: Failed with result 'exit-code'.
Dec  4 20:07:05 np0005546222 python3.9[158384]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:07:06 np0005546222 systemd[1]: Stopping podman_exporter container...
Dec  4 20:07:06 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:05 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  4 20:07:06 np0005546222 systemd[1]: libpod-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec  4 20:07:06 np0005546222 podman[158388]: 2025-12-05 01:07:06.0996875 +0000 UTC m=+0.066652134 container died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:07:06 np0005546222 systemd[1]: 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-3d5bdfaef99ef5a6.timer: Deactivated successfully.
Dec  4 20:07:06 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec  4 20:07:06 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e-userdata-shm.mount: Deactivated successfully.
Dec  4 20:07:06 np0005546222 systemd[1]: var-lib-containers-storage-overlay-492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a-merged.mount: Deactivated successfully.
Dec  4 20:07:06 np0005546222 podman[158388]: 2025-12-05 01:07:06.384451532 +0000 UTC m=+0.351416136 container cleanup 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  4 20:07:06 np0005546222 podman[158388]: podman_exporter
Dec  4 20:07:06 np0005546222 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  4 20:07:06 np0005546222 podman[158417]: podman_exporter
Dec  4 20:07:06 np0005546222 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  4 20:07:06 np0005546222 systemd[1]: Stopped podman_exporter container.
Dec  4 20:07:06 np0005546222 systemd[1]: Starting podman_exporter container...
Dec  4 20:07:06 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:07:06 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:06 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/492903c1adf186d3b8596a7f27f4933dbed2c6566affc3d01772a4df0cbd308a/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:06 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.
Dec  4 20:07:06 np0005546222 podman[158430]: 2025-12-05 01:07:06.688245871 +0000 UTC m=+0.172842186 container init 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.713Z caller=handler.go:105 level=info collector=container
Dec  4 20:07:06 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:06 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  4 20:07:06 np0005546222 podman[158197]: time="2025-12-05T01:07:06Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:07:06 np0005546222 podman[158430]: 2025-12-05 01:07:06.730532924 +0000 UTC m=+0.215129179 container start 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:07:06 np0005546222 podman[158430]: podman_exporter
Dec  4 20:07:06 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:06 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 9688 "" "Go-http-client/1.1"
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.740Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.741Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  4 20:07:06 np0005546222 podman_exporter[158445]: ts=2025-12-05T01:07:06.741Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  4 20:07:06 np0005546222 systemd[1]: Started podman_exporter container.
Dec  4 20:07:06 np0005546222 podman[158454]: 2025-12-05 01:07:06.850486209 +0000 UTC m=+0.103791789 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  4 20:07:07 np0005546222 python3.9[158633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:08 np0005546222 python3.9[158756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896827.0250788-748-227243265095570/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:07:09 np0005546222 python3.9[158908]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  4 20:07:10 np0005546222 python3.9[159060]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:07:11 np0005546222 python3[159212]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:07:11 np0005546222 podman[159238]: 2025-12-05 01:07:11.669796943 +0000 UTC m=+0.076790517 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125)
Dec  4 20:07:11 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:07:11 np0005546222 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed with result 'exit-code'.
Dec  4 20:07:11 np0005546222 podman[159239]: 2025-12-05 01:07:11.7326963 +0000 UTC m=+0.139104016 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  4 20:07:13 np0005546222 podman[159225]: 2025-12-05 01:07:13.630138903 +0000 UTC m=+2.459739805 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  4 20:07:13 np0005546222 podman[159367]: 2025-12-05 01:07:13.76639071 +0000 UTC m=+0.048892547 container create 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Dec  4 20:07:13 np0005546222 podman[159367]: 2025-12-05 01:07:13.73909883 +0000 UTC m=+0.021600667 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  4 20:07:13 np0005546222 python3[159212]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  4 20:07:14 np0005546222 python3.9[159559]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:07:15 np0005546222 python3.9[159713]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:16 np0005546222 python3.9[159864]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896835.4069624-801-173414154476593/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:16 np0005546222 python3.9[159940]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:07:16 np0005546222 systemd[1]: Reloading.
Dec  4 20:07:16 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:07:16 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:07:17 np0005546222 python3.9[160051]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:07:17 np0005546222 systemd[1]: Reloading.
Dec  4 20:07:17 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:07:17 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:07:17 np0005546222 systemd[1]: Starting openstack_network_exporter container...
Dec  4 20:07:18 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:07:18 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:18 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:18 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:18 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec  4 20:07:18 np0005546222 podman[160091]: 2025-12-05 01:07:18.115495829 +0000 UTC m=+0.126020983 container init 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *bridge.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *coverage.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *datapath.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *iface.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *memory.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovnnorthd.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovn.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *ovsdbserver.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *pmd_perf.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *pmd_rxq.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: INFO    01:07:18 main.go:48: registering *vswitch.Collector
Dec  4 20:07:18 np0005546222 openstack_network_exporter[160106]: NOTICE  01:07:18 main.go:76: listening on https://:9105/metrics
Dec  4 20:07:18 np0005546222 podman[160091]: 2025-12-05 01:07:18.14666715 +0000 UTC m=+0.157192284 container start 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  4 20:07:18 np0005546222 podman[160091]: openstack_network_exporter
Dec  4 20:07:18 np0005546222 systemd[1]: Started openstack_network_exporter container.
Dec  4 20:07:18 np0005546222 podman[160117]: 2025-12-05 01:07:18.298513587 +0000 UTC m=+0.144173822 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  4 20:07:19 np0005546222 python3.9[160290]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:07:19 np0005546222 systemd[1]: Stopping openstack_network_exporter container...
Dec  4 20:07:19 np0005546222 systemd[1]: libpod-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec  4 20:07:19 np0005546222 podman[160294]: 2025-12-05 01:07:19.27081464 +0000 UTC m=+0.066825409 container died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  4 20:07:19 np0005546222 systemd[1]: 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-2a2a4f50eb063d7d.timer: Deactivated successfully.
Dec  4 20:07:19 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec  4 20:07:19 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88-userdata-shm.mount: Deactivated successfully.
Dec  4 20:07:19 np0005546222 systemd[1]: var-lib-containers-storage-overlay-952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e-merged.mount: Deactivated successfully.
Dec  4 20:07:20 np0005546222 podman[160294]: 2025-12-05 01:07:20.114278425 +0000 UTC m=+0.910289234 container cleanup 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  4 20:07:20 np0005546222 podman[160294]: openstack_network_exporter
Dec  4 20:07:20 np0005546222 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  4 20:07:20 np0005546222 podman[160321]: openstack_network_exporter
Dec  4 20:07:20 np0005546222 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  4 20:07:20 np0005546222 systemd[1]: Stopped openstack_network_exporter container.
Dec  4 20:07:20 np0005546222 systemd[1]: Starting openstack_network_exporter container...
Dec  4 20:07:20 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:07:20 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:20 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:20 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/952f396ae4fa4ce7b65d319291f161c33a679374731006ecac684d22de091d0e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:07:20 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.
Dec  4 20:07:20 np0005546222 podman[160334]: 2025-12-05 01:07:20.339507033 +0000 UTC m=+0.136427203 container init 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *bridge.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *coverage.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *datapath.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *iface.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *memory.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovnnorthd.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovn.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *ovsdbserver.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *pmd_perf.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *pmd_rxq.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: INFO    01:07:20 main.go:48: registering *vswitch.Collector
Dec  4 20:07:20 np0005546222 openstack_network_exporter[160350]: NOTICE  01:07:20 main.go:76: listening on https://:9105/metrics
Dec  4 20:07:20 np0005546222 podman[160334]: 2025-12-05 01:07:20.382926131 +0000 UTC m=+0.179846361 container start 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=)
Dec  4 20:07:20 np0005546222 podman[160334]: openstack_network_exporter
Dec  4 20:07:20 np0005546222 systemd[1]: Started openstack_network_exporter container.
Dec  4 20:07:20 np0005546222 podman[160360]: 2025-12-05 01:07:20.504248868 +0000 UTC m=+0.097595587 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, config_id=edpm, version=9.6, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec  4 20:07:21 np0005546222 python3.9[160529]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:07:22 np0005546222 python3.9[160681]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  4 20:07:23 np0005546222 python3.9[160846]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:23 np0005546222 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec  4 20:07:23 np0005546222 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 20:07:23 np0005546222 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  4 20:07:23 np0005546222 podman[160847]: 2025-12-05 01:07:23.745978433 +0000 UTC m=+0.113842528 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  4 20:07:23 np0005546222 podman[160868]: 2025-12-05 01:07:23.850168173 +0000 UTC m=+0.080552242 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:07:23 np0005546222 podman[160847]: 2025-12-05 01:07:23.920312654 +0000 UTC m=+0.288176719 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:07:23 np0005546222 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec  4 20:07:24 np0005546222 podman[161002]: 2025-12-05 01:07:24.626539559 +0000 UTC m=+0.049167585 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  4 20:07:24 np0005546222 python3.9[161053]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:24 np0005546222 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec  4 20:07:24 np0005546222 podman[161054]: 2025-12-05 01:07:24.903385748 +0000 UTC m=+0.086141265 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 20:07:24 np0005546222 podman[161054]: 2025-12-05 01:07:24.936095786 +0000 UTC m=+0.118851283 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 20:07:24 np0005546222 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec  4 20:07:25 np0005546222 python3.9[161237]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:26 np0005546222 python3.9[161389]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  4 20:07:27 np0005546222 python3.9[161554]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:27 np0005546222 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec  4 20:07:27 np0005546222 podman[161555]: 2025-12-05 01:07:27.456910896 +0000 UTC m=+0.105542124 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:07:27 np0005546222 podman[161555]: 2025-12-05 01:07:27.465155906 +0000 UTC m=+0.113787104 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:07:27 np0005546222 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  4 20:07:28 np0005546222 python3.9[161739]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:28 np0005546222 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec  4 20:07:28 np0005546222 podman[161740]: 2025-12-05 01:07:28.407682983 +0000 UTC m=+0.109465223 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  4 20:07:28 np0005546222 podman[161740]: 2025-12-05 01:07:28.448296515 +0000 UTC m=+0.150078655 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:07:28 np0005546222 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  4 20:07:29 np0005546222 python3.9[161922]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:30 np0005546222 python3.9[162074]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  4 20:07:31 np0005546222 python3.9[162239]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:31 np0005546222 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec  4 20:07:31 np0005546222 podman[162240]: 2025-12-05 01:07:31.345524141 +0000 UTC m=+0.078613643 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:07:31 np0005546222 podman[162259]: 2025-12-05 01:07:31.408107655 +0000 UTC m=+0.051594139 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:07:31 np0005546222 podman[162240]: 2025-12-05 01:07:31.414990106 +0000 UTC m=+0.148079578 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  4 20:07:31 np0005546222 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  4 20:07:32 np0005546222 python3.9[162423]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:32 np0005546222 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec  4 20:07:32 np0005546222 podman[162424]: 2025-12-05 01:07:32.297544383 +0000 UTC m=+0.099197437 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:07:32 np0005546222 podman[162424]: 2025-12-05 01:07:32.332363203 +0000 UTC m=+0.134016277 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:07:32 np0005546222 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  4 20:07:33 np0005546222 python3.9[162603]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:34 np0005546222 python3.9[162755]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  4 20:07:34 np0005546222 python3.9[162920]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:34 np0005546222 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec  4 20:07:34 np0005546222 podman[162921]: 2025-12-05 01:07:34.8060262 +0000 UTC m=+0.067226725 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:07:34 np0005546222 podman[162921]: 2025-12-05 01:07:34.839168864 +0000 UTC m=+0.100369359 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:07:34 np0005546222 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec  4 20:07:35 np0005546222 python3.9[163102]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:35 np0005546222 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec  4 20:07:35 np0005546222 podman[163103]: 2025-12-05 01:07:35.608040419 +0000 UTC m=+0.054480410 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:07:35 np0005546222 podman[163122]: 2025-12-05 01:07:35.667148937 +0000 UTC m=+0.048665807 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:07:35 np0005546222 podman[163103]: 2025-12-05 01:07:35.673079863 +0000 UTC m=+0.119519664 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:07:35 np0005546222 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec  4 20:07:36 np0005546222 python3.9[163286]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:37 np0005546222 python3.9[163438]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  4 20:07:37 np0005546222 podman[163575]: 2025-12-05 01:07:37.594802881 +0000 UTC m=+0.062883184 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  4 20:07:37 np0005546222 python3.9[163624]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:37 np0005546222 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec  4 20:07:37 np0005546222 podman[163627]: 2025-12-05 01:07:37.894094645 +0000 UTC m=+0.087951673 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  4 20:07:37 np0005546222 podman[163627]: 2025-12-05 01:07:37.923404722 +0000 UTC m=+0.117261720 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec  4 20:07:37 np0005546222 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec  4 20:07:38 np0005546222 python3.9[163810]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:07:38 np0005546222 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec  4 20:07:38 np0005546222 podman[163811]: 2025-12-05 01:07:38.789364255 +0000 UTC m=+0.080809274 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec  4 20:07:38 np0005546222 podman[163811]: 2025-12-05 01:07:38.821270044 +0000 UTC m=+0.112715033 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  4 20:07:38 np0005546222 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec  4 20:07:39 np0005546222 python3.9[163996]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:40 np0005546222 python3.9[164148]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:40 np0005546222 python3.9[164300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:41 np0005546222 python3.9[164423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896860.4550586-1016-203472479103166/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:42 np0005546222 podman[164547]: 2025-12-05 01:07:42.018211316 +0000 UTC m=+0.059582402 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  4 20:07:42 np0005546222 podman[164548]: 2025-12-05 01:07:42.074108065 +0000 UTC m=+0.113095024 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 20:07:42 np0005546222 python3.9[164612]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:42 np0005546222 python3.9[164774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:43 np0005546222 python3.9[164852]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:43 np0005546222 python3.9[165004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:44 np0005546222 python3.9[165082]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.c2522dd1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:44 np0005546222 python3.9[165234]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:45 np0005546222 python3.9[165312]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:46 np0005546222 python3.9[165464]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:07:46 np0005546222 python3[165617]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 20:07:47 np0005546222 python3.9[165769]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:47 np0005546222 python3.9[165847]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:48 np0005546222 python3.9[165999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:49 np0005546222 python3.9[166077]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:49 np0005546222 python3.9[166229]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:50 np0005546222 python3.9[166307]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:50 np0005546222 podman[166384]: 2025-12-05 01:07:50.653840398 +0000 UTC m=+0.077273615 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  4 20:07:50 np0005546222 python3.9[166481]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:51 np0005546222 python3.9[166559]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:52 np0005546222 python3.9[166711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:07:52 np0005546222 python3.9[166836]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764896871.5155437-1141-40659162752585/.source.nft follow=False _original_basename=ruleset.j2 checksum=bc835bd485c96b4ac7465e87d3a790a8d097f2aa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:53 np0005546222 python3.9[166988]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:54 np0005546222 python3.9[167140]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:07:54 np0005546222 podman[167267]: 2025-12-05 01:07:54.941383955 +0000 UTC m=+0.065026424 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  4 20:07:55 np0005546222 python3.9[167309]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:55 np0005546222 python3.9[167470]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:07:56 np0005546222 python3.9[167623]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:07:57 np0005546222 python3.9[167777]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:07:58 np0005546222 python3.9[167932]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:07:58 np0005546222 systemd[1]: session-22.scope: Deactivated successfully.
Dec  4 20:07:58 np0005546222 systemd[1]: session-22.scope: Consumed 2min 4.612s CPU time.
Dec  4 20:07:58 np0005546222 systemd-logind[792]: Session 22 logged out. Waiting for processes to exit.
Dec  4 20:07:58 np0005546222 systemd-logind[792]: Removed session 22.
Dec  4 20:07:59 np0005546222 podman[158197]: time="2025-12-05T01:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:07:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  4 20:07:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2132 "" "Go-http-client/1.1"
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:08:01 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:08:04 np0005546222 systemd-logind[792]: New session 23 of user zuul.
Dec  4 20:08:04 np0005546222 systemd[1]: Started Session 23 of User zuul.
Dec  4 20:08:05 np0005546222 python3.9[168119]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:08:05 np0005546222 systemd[1]: Reloading.
Dec  4 20:08:05 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:08:05 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:08:06 np0005546222 python3.9[168304]: ansible-ansible.builtin.service_facts Invoked
Dec  4 20:08:06 np0005546222 network[168321]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  4 20:08:06 np0005546222 network[168322]: 'network-scripts' will be removed from distribution in near future.
Dec  4 20:08:06 np0005546222 network[168323]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  4 20:08:07 np0005546222 podman[168347]: 2025-12-05 01:08:07.723871192 +0000 UTC m=+0.076289328 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:08:12 np0005546222 podman[168460]: 2025-12-05 01:08:12.659121928 +0000 UTC m=+0.083228222 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Dec  4 20:08:12 np0005546222 podman[168461]: 2025-12-05 01:08:12.671728909 +0000 UTC m=+0.090410681 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:08:13 np0005546222 python3.9[168662]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:08:14 np0005546222 python3.9[168815]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:15 np0005546222 python3.9[168967]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:16 np0005546222 python3.9[169119]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:08:17 np0005546222 python3.9[169271]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:08:18 np0005546222 python3.9[169423]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:08:18 np0005546222 systemd[1]: Reloading.
Dec  4 20:08:18 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:08:18 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:08:19 np0005546222 python3.9[169609]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:08:20 np0005546222 python3.9[169762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:21 np0005546222 podman[169886]: 2025-12-05 01:08:21.354689141 +0000 UTC m=+0.074424436 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec  4 20:08:21 np0005546222 python3.9[169928]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:08:22 np0005546222 python3.9[170084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:23 np0005546222 python3.9[170205]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896901.7899587-125-141671540543922/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:24 np0005546222 python3.9[170358]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  4 20:08:25 np0005546222 podman[170483]: 2025-12-05 01:08:25.321150315 +0000 UTC m=+0.066296089 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:08:25 np0005546222 python3.9[170521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:26 np0005546222 python3.9[170653]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896904.9715354-171-54553531384899/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:26 np0005546222 python3.9[170803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:27 np0005546222 python3.9[170924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896906.322149-171-192554944794391/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:27 np0005546222 python3.9[171074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:28 np0005546222 python3.9[171195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764896907.5024347-171-281401353561977/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:29 np0005546222 python3.9[171345]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:08:29 np0005546222 podman[158197]: time="2025-12-05T01:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:08:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 12784 "" "Go-http-client/1.1"
Dec  4 20:08:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2136 "" "Go-http-client/1.1"
Dec  4 20:08:30 np0005546222 python3.9[171499]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:08:30 np0005546222 python3.9[171651]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:08:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:08:31 np0005546222 python3.9[171772]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896910.4038968-230-200323059941808/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:32 np0005546222 python3.9[171922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:32 np0005546222 python3.9[171998]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:33 np0005546222 python3.9[172148]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:33 np0005546222 python3.9[172269]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896912.816547-230-18849588089998/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:34 np0005546222 python3.9[172419]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:35 np0005546222 python3.9[172540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896914.1797688-230-14887794234914/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:35 np0005546222 python3.9[172690]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:36 np0005546222 python3.9[172811]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896915.3937232-230-235198200570475/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:37 np0005546222 python3.9[172961]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:37 np0005546222 python3.9[173082]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896916.7616167-230-168119229064234/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:37 np0005546222 podman[173083]: 2025-12-05 01:08:37.863783096 +0000 UTC m=+0.071546343 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  4 20:08:38 np0005546222 python3.9[173257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:39 np0005546222 python3.9[173333]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:40 np0005546222 python3.9[173485]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:40 np0005546222 python3.9[173637]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:41 np0005546222 python3.9[173789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:42 np0005546222 python3.9[173941]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.539 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.540 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.541 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.542 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8314155760>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.545 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.549 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.550 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:08:42.551 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:08:42 np0005546222 podman[174065]: 2025-12-05 01:08:42.781868334 +0000 UTC m=+0.057914010 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec  4 20:08:42 np0005546222 podman[174066]: 2025-12-05 01:08:42.819808666 +0000 UTC m=+0.095080652 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  4 20:08:42 np0005546222 python3.9[174067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:43 np0005546222 python3.9[174186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:43 np0005546222 python3.9[174309]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896921.8865435-349-9481887317414/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:44 np0005546222 python3.9[174461]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:08:45 np0005546222 python3.9[174584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764896924.1043966-349-159969000862248/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  4 20:08:46 np0005546222 python3.9[174736]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  4 20:08:47 np0005546222 python3.9[174888]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:08:48 np0005546222 python3[175040]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:08:51 np0005546222 podman[175099]: 2025-12-05 01:08:51.92709733 +0000 UTC m=+0.327313355 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, vcs-type=git, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  4 20:08:53 np0005546222 podman[175054]: 2025-12-05 01:08:53.994024611 +0000 UTC m=+5.583194417 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  4 20:08:54 np0005546222 podman[175173]: 2025-12-05 01:08:54.146479228 +0000 UTC m=+0.044875862 container create 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125)
Dec  4 20:08:54 np0005546222 podman[175173]: 2025-12-05 01:08:54.119783897 +0000 UTC m=+0.018180531 image pull 24d4416455a3caf43088be1a1fdcd72d9680ad5e64ac2b338cb2cc50d15f5acc quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  4 20:08:54 np0005546222 python3[175040]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec  4 20:08:54 np0005546222 python3.9[175363]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:08:55 np0005546222 podman[175489]: 2025-12-05 01:08:55.635048347 +0000 UTC m=+0.060586689 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:08:55 np0005546222 python3.9[175530]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:56 np0005546222 python3.9[175694]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896935.892654-427-208129851032019/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:08:57 np0005546222 python3.9[175770]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:08:57 np0005546222 systemd[1]: Reloading.
Dec  4 20:08:57 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:08:57 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:08:58 np0005546222 python3.9[175880]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:08:58 np0005546222 systemd[1]: Reloading.
Dec  4 20:08:58 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:08:58 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:08:58 np0005546222 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  4 20:08:58 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:08:58 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:08:58 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:08:58 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  4 20:08:58 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  4 20:08:58 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec  4 20:08:58 np0005546222 podman[175920]: 2025-12-05 01:08:58.995373695 +0000 UTC m=+0.168675488 container init 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + sudo -E kolla_set_configs
Dec  4 20:08:59 np0005546222 podman[175920]: 2025-12-05 01:08:59.037971997 +0000 UTC m=+0.211273740 container start 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  4 20:08:59 np0005546222 podman[175920]: ceilometer_agent_ipmi
Dec  4 20:08:59 np0005546222 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Validating config file
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying service configuration files
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: INFO:__main__:Writing out command to execute
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: ++ cat /run_command
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + ARGS=
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + sudo kolla_copy_cacerts
Dec  4 20:08:59 np0005546222 podman[175942]: 2025-12-05 01:08:59.139593808 +0000 UTC m=+0.077908138 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:08:59 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:08:59 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.service: Failed with result 'exit-code'.
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + [[ ! -n '' ]]
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + . kolla_extend_start
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + umask 0022
Dec  4 20:08:59 np0005546222 ceilometer_agent_ipmi[175935]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  4 20:08:59 np0005546222 podman[158197]: time="2025-12-05T01:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:08:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 15575 "" "Go-http-client/1.1"
Dec  4 20:08:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2567 "" "Go-http-client/1.1"
Dec  4 20:09:00 np0005546222 python3.9[176118]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.166 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.167 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.168 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.169 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.170 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.171 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.172 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.173 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.174 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.175 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.176 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.177 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.178 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.179 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.180 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.181 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.201 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.203 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.204 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  4 20:09:00 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:00.383 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpazubod4i/privsep.sock']
Dec  4 20:09:00 np0005546222 python3.9[176277]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  4 20:09:01 np0005546222 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.205 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.206 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpazubod4i/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.027 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.032 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.036 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.036 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.316 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.317 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.317 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.318 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.319 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.319 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.321 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.322 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.323 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.324 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.325 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.326 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.327 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.328 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.329 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.330 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.331 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.332 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.333 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.334 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.335 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.336 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.337 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.338 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.339 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.340 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.341 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.342 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  4 20:09:01 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:01.345 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  4 20:09:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:09:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:09:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:09:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:09:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:09:01 np0005546222 python3[176435]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  4 20:09:08 np0005546222 podman[176579]: 2025-12-05 01:09:08.232270043 +0000 UTC m=+0.151458752 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  4 20:09:08 np0005546222 podman[176449]: 2025-12-05 01:09:08.325806805 +0000 UTC m=+6.426361077 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  4 20:09:08 np0005546222 podman[176675]: 2025-12-05 01:09:08.50599192 +0000 UTC m=+0.063385182 container create de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vcs-type=git, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, managed_by=edpm_ansible)
Dec  4 20:09:08 np0005546222 podman[176675]: 2025-12-05 01:09:08.469527996 +0000 UTC m=+0.026921338 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  4 20:09:08 np0005546222 python3[176435]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  4 20:09:09 np0005546222 python3.9[176865]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:09:10 np0005546222 python3.9[177019]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:11 np0005546222 python3.9[177170]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764896950.3974333-489-70350947086373/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:11 np0005546222 python3.9[177246]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  4 20:09:11 np0005546222 systemd[1]: Reloading.
Dec  4 20:09:11 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:09:11 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:09:12 np0005546222 python3.9[177358]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  4 20:09:12 np0005546222 systemd[1]: Reloading.
Dec  4 20:09:12 np0005546222 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  4 20:09:12 np0005546222 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  4 20:09:13 np0005546222 systemd[1]: Starting kepler container...
Dec  4 20:09:13 np0005546222 podman[177398]: 2025-12-05 01:09:13.274008852 +0000 UTC m=+0.231680869 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:09:13 np0005546222 podman[177396]: 2025-12-05 01:09:13.297430758 +0000 UTC m=+0.257587629 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  4 20:09:13 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:09:15 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec  4 20:09:15 np0005546222 podman[177399]: 2025-12-05 01:09:15.365410717 +0000 UTC m=+2.304707788 container init de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, version=9.4, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9)
Dec  4 20:09:15 np0005546222 kepler[177459]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.392856       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.393081       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.393134       1 config.go:295] kernel version: 5.14
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.393854       1 power.go:78] Unable to obtain power, use estimate method
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.393904       1 redfish.go:169] failed to get redfish credential file path
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.394257       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.394263       1 power.go:79] using none to obtain power
Dec  4 20:09:15 np0005546222 kepler[177459]: E1205 01:09:15.394278       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  4 20:09:15 np0005546222 kepler[177459]: E1205 01:09:15.394535       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  4 20:09:15 np0005546222 kepler[177459]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  4 20:09:15 np0005546222 podman[177399]: 2025-12-05 01:09:15.395761753 +0000 UTC m=+2.335058734 container start de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, vcs-type=git, name=ubi9, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.396733       1 exporter.go:84] Number of CPUs: 8
Dec  4 20:09:15 np0005546222 podman[177399]: kepler
Dec  4 20:09:15 np0005546222 systemd[1]: Started kepler container.
Dec  4 20:09:15 np0005546222 podman[177469]: 2025-12-05 01:09:15.488293549 +0000 UTC m=+0.076626145 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  4 20:09:15 np0005546222 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:09:15 np0005546222 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.service: Failed with result 'exit-code'.
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.965392       1 watcher.go:83] Using in cluster k8s config
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.965608       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  4 20:09:15 np0005546222 kepler[177459]: E1205 01:09:15.965795       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.971500       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.971651       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.980004       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.980153       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.988818       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.989021       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  4 20:09:15 np0005546222 kepler[177459]: I1205 01:09:15.989144       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.004836       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005007       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005018       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005027       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005039       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005066       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005219       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005343       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005419       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005452       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.005612       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  4 20:09:16 np0005546222 kepler[177459]: I1205 01:09:16.006493       1 exporter.go:208] Started Kepler in 613.897533ms
Dec  4 20:09:16 np0005546222 python3.9[177655]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:09:16 np0005546222 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  4 20:09:16 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.568 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  4 20:09:16 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.670 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  4 20:09:16 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.670 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  4 20:09:16 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.671 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  4 20:09:16 np0005546222 ceilometer_agent_ipmi[175935]: 2025-12-05 01:09:16.687 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  4 20:09:16 np0005546222 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec  4 20:09:16 np0005546222 systemd[1]: libpod-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Consumed 2.677s CPU time.
Dec  4 20:09:16 np0005546222 podman[177659]: 2025-12-05 01:09:16.910289934 +0000 UTC m=+0.422569221 container died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:09:16 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-775ecfd96ef75a18.timer: Deactivated successfully.
Dec  4 20:09:16 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec  4 20:09:16 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-userdata-shm.mount: Deactivated successfully.
Dec  4 20:09:16 np0005546222 systemd[1]: var-lib-containers-storage-overlay-5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d-merged.mount: Deactivated successfully.
Dec  4 20:09:17 np0005546222 podman[177659]: 2025-12-05 01:09:17.476566715 +0000 UTC m=+0.988846022 container cleanup 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  4 20:09:17 np0005546222 podman[177659]: ceilometer_agent_ipmi
Dec  4 20:09:17 np0005546222 podman[177686]: ceilometer_agent_ipmi
Dec  4 20:09:17 np0005546222 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  4 20:09:17 np0005546222 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  4 20:09:17 np0005546222 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  4 20:09:17 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:09:17 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  4 20:09:17 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  4 20:09:17 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  4 20:09:17 np0005546222 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aeaaee3422df2ccf4d4601c96e9c4f445969cd6f5a16b56f77ac2bf9514fd4d/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  4 20:09:17 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.
Dec  4 20:09:17 np0005546222 podman[177698]: 2025-12-05 01:09:17.835681313 +0000 UTC m=+0.217724758 container init 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + sudo -E kolla_set_configs
Dec  4 20:09:17 np0005546222 podman[177698]: 2025-12-05 01:09:17.868065451 +0000 UTC m=+0.250108886 container start 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  4 20:09:17 np0005546222 podman[177698]: ceilometer_agent_ipmi
Dec  4 20:09:17 np0005546222 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Validating config file
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying service configuration files
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: INFO:__main__:Writing out command to execute
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: ++ cat /run_command
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + ARGS=
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + sudo kolla_copy_cacerts
Dec  4 20:09:17 np0005546222 podman[177719]: 2025-12-05 01:09:17.975578725 +0000 UTC m=+0.089020536 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  4 20:09:17 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:09:17 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Failed with result 'exit-code'.
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + [[ ! -n '' ]]
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + . kolla_extend_start
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + umask 0022
Dec  4 20:09:17 np0005546222 ceilometer_agent_ipmi[177712]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.865 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.866 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.867 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.868 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.869 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.870 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.871 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.872 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.874 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.875 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.876 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.877 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.878 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.879 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.880 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.881 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.902 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.903 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.904 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  4 20:09:18 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:18.921 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmps8_kvf0_/privsep.sock']
Dec  4 20:09:18 np0005546222 python3.9[177894]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  4 20:09:19 np0005546222 systemd[1]: Stopping kepler container...
Dec  4 20:09:19 np0005546222 kepler[177459]: I1205 01:09:19.110720       1 exporter.go:218] Received shutdown signal
Dec  4 20:09:19 np0005546222 kepler[177459]: I1205 01:09:19.111878       1 exporter.go:226] Exiting...
Dec  4 20:09:19 np0005546222 systemd[1]: libpod-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec  4 20:09:19 np0005546222 podman[177905]: 2025-12-05 01:09:19.313254306 +0000 UTC m=+0.255581878 container died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec  4 20:09:19 np0005546222 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-15f04539a3e49d9c.timer: Deactivated successfully.
Dec  4 20:09:19 np0005546222 systemd[1]: Stopped /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec  4 20:09:19 np0005546222 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-userdata-shm.mount: Deactivated successfully.
Dec  4 20:09:19 np0005546222 systemd[1]: var-lib-containers-storage-overlay-a374ec8aa50f4d970047ac6324333a688dcc2712f075ca8bf268b9db1c5579b0-merged.mount: Deactivated successfully.
Dec  4 20:09:19 np0005546222 podman[177905]: 2025-12-05 01:09:19.913402684 +0000 UTC m=+0.855730246 container cleanup de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, container_name=kepler, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, architecture=x86_64)
Dec  4 20:09:19 np0005546222 podman[177905]: kepler
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.949 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.950 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmps8_kvf0_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.488 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.496 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.501 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  4 20:09:19 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:19.501 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  4 20:09:19 np0005546222 podman[177935]: kepler
Dec  4 20:09:19 np0005546222 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  4 20:09:19 np0005546222 systemd[1]: Stopped kepler container.
Dec  4 20:09:20 np0005546222 systemd[1]: Starting kepler container...
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.055 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.055 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.056 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.057 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.060 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.061 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.062 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.063 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.064 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.065 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.066 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.067 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.068 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.069 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.070 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.071 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.072 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.073 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.074 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.075 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.076 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.077 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.078 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.079 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  4 20:09:20 np0005546222 systemd[1]: Started libcrun container.
Dec  4 20:09:20 np0005546222 ceilometer_agent_ipmi[177712]: 2025-12-05 01:09:20.082 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  4 20:09:20 np0005546222 systemd[1]: Started /usr/bin/podman healthcheck run de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.
Dec  4 20:09:20 np0005546222 podman[177950]: 2025-12-05 01:09:20.118615037 +0000 UTC m=+0.104875586 container init de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=)
Dec  4 20:09:20 np0005546222 kepler[177967]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  4 20:09:20 np0005546222 podman[177950]: 2025-12-05 01:09:20.146000476 +0000 UTC m=+0.132261005 container start de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc.)
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.148795       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.148978       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.149015       1 config.go:295] kernel version: 5.14
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.149609       1 power.go:78] Unable to obtain power, use estimate method
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.149634       1 redfish.go:169] failed to get redfish credential file path
Dec  4 20:09:20 np0005546222 podman[177950]: kepler
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.150218       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.150238       1 power.go:79] using none to obtain power
Dec  4 20:09:20 np0005546222 kepler[177967]: E1205 01:09:20.150260       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  4 20:09:20 np0005546222 kepler[177967]: E1205 01:09:20.150293       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  4 20:09:20 np0005546222 kepler[177967]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.152488       1 exporter.go:84] Number of CPUs: 8
Dec  4 20:09:20 np0005546222 systemd[1]: Started kepler container.
Dec  4 20:09:20 np0005546222 podman[177978]: 2025-12-05 01:09:20.238327086 +0000 UTC m=+0.082400984 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-type=git, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, distribution-scope=public, io.buildah.version=1.29.0)
Dec  4 20:09:20 np0005546222 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:09:20 np0005546222 systemd[1]: de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91-4ebf4b8c79608771.service: Failed with result 'exit-code'.
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.576599       1 watcher.go:83] Using in cluster k8s config
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.576642       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  4 20:09:20 np0005546222 kepler[177967]: E1205 01:09:20.576722       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.582683       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.582736       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.587496       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.587537       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.594689       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.594730       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.594751       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601332       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601371       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601376       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601382       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601390       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601406       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601507       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601547       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601596       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601624       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.601716       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  4 20:09:20 np0005546222 kepler[177967]: I1205 01:09:20.602173       1 exporter.go:208] Started Kepler in 453.628694ms
Dec  4 20:09:20 np0005546222 python3.9[178163]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  4 20:09:22 np0005546222 python3.9[178315]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  4 20:09:23 np0005546222 python3.9[178479]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:23 np0005546222 podman[178480]: 2025-12-05 01:09:23.753953474 +0000 UTC m=+0.159876490 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  4 20:09:23 np0005546222 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec  4 20:09:23 np0005546222 podman[178486]: 2025-12-05 01:09:23.813455795 +0000 UTC m=+0.181693945 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:09:23 np0005546222 podman[178486]: 2025-12-05 01:09:23.846374047 +0000 UTC m=+0.214612197 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:09:23 np0005546222 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec  4 20:09:24 np0005546222 python3.9[178680]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:25 np0005546222 systemd[1]: Started libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope.
Dec  4 20:09:25 np0005546222 podman[178681]: 2025-12-05 01:09:25.074333219 +0000 UTC m=+0.157368085 container exec d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  4 20:09:25 np0005546222 podman[178681]: 2025-12-05 01:09:25.109086399 +0000 UTC m=+0.192121265 container exec_died d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  4 20:09:25 np0005546222 systemd[1]: libpod-conmon-d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d.scope: Deactivated successfully.
Dec  4 20:09:26 np0005546222 podman[178834]: 2025-12-05 01:09:26.016844419 +0000 UTC m=+0.117819310 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:09:26 np0005546222 python3.9[178887]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:27 np0005546222 python3.9[179039]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  4 20:09:28 np0005546222 python3.9[179204]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:28 np0005546222 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec  4 20:09:28 np0005546222 podman[179205]: 2025-12-05 01:09:28.344650396 +0000 UTC m=+0.121330302 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  4 20:09:28 np0005546222 podman[179205]: 2025-12-05 01:09:28.379306383 +0000 UTC m=+0.155986299 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  4 20:09:28 np0005546222 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  4 20:09:29 np0005546222 python3.9[179384]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:29 np0005546222 systemd[1]: Started libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope.
Dec  4 20:09:29 np0005546222 podman[179385]: 2025-12-05 01:09:29.595998952 +0000 UTC m=+0.173024360 container exec 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  4 20:09:29 np0005546222 podman[179385]: 2025-12-05 01:09:29.60441258 +0000 UTC m=+0.181437958 container exec_died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  4 20:09:29 np0005546222 systemd[1]: libpod-conmon-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  4 20:09:29 np0005546222 podman[158197]: time="2025-12-05T01:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:09:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18535 "" "Go-http-client/1.1"
Dec  4 20:09:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec  4 20:09:30 np0005546222 python3.9[179568]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:09:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:09:31 np0005546222 python3.9[179720]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  4 20:09:32 np0005546222 python3.9[179887]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:32 np0005546222 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec  4 20:09:32 np0005546222 podman[179888]: 2025-12-05 01:09:32.956527046 +0000 UTC m=+0.108525751 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  4 20:09:33 np0005546222 podman[179909]: 2025-12-05 01:09:33.124719849 +0000 UTC m=+0.149658534 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  4 20:09:33 np0005546222 podman[179888]: 2025-12-05 01:09:33.151759429 +0000 UTC m=+0.303758084 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  4 20:09:33 np0005546222 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  4 20:09:34 np0005546222 python3.9[180073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:34 np0005546222 systemd[1]: Started libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope.
Dec  4 20:09:34 np0005546222 podman[180074]: 2025-12-05 01:09:34.460359949 +0000 UTC m=+0.146753960 container exec 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  4 20:09:34 np0005546222 podman[180074]: 2025-12-05 01:09:34.496365321 +0000 UTC m=+0.182759272 container exec_died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  4 20:09:34 np0005546222 systemd[1]: libpod-conmon-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  4 20:09:35 np0005546222 python3.9[180255]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:36 np0005546222 python3.9[180407]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  4 20:09:37 np0005546222 python3.9[180572]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:37 np0005546222 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec  4 20:09:37 np0005546222 podman[180573]: 2025-12-05 01:09:37.844355372 +0000 UTC m=+0.126959039 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  4 20:09:37 np0005546222 podman[180573]: 2025-12-05 01:09:37.883947085 +0000 UTC m=+0.166550742 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:09:37 np0005546222 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec  4 20:09:38 np0005546222 podman[180652]: 2025-12-05 01:09:38.479443698 +0000 UTC m=+0.134334803 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  4 20:09:38 np0005546222 python3.9[180779]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:39 np0005546222 systemd[1]: Started libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope.
Dec  4 20:09:39 np0005546222 podman[180780]: 2025-12-05 01:09:39.207136808 +0000 UTC m=+0.211321132 container exec 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:09:39 np0005546222 podman[180780]: 2025-12-05 01:09:39.242032878 +0000 UTC m=+0.246217112 container exec_died 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  4 20:09:39 np0005546222 systemd[1]: libpod-conmon-63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e.scope: Deactivated successfully.
Dec  4 20:09:41 np0005546222 python3.9[180962]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:42 np0005546222 python3.9[181114]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  4 20:09:43 np0005546222 python3.9[181278]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:43 np0005546222 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec  4 20:09:43 np0005546222 podman[181279]: 2025-12-05 01:09:43.340792844 +0000 UTC m=+0.145241534 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec  4 20:09:43 np0005546222 podman[181279]: 2025-12-05 01:09:43.379103388 +0000 UTC m=+0.183552088 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container)
Dec  4 20:09:43 np0005546222 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec  4 20:09:43 np0005546222 podman[181297]: 2025-12-05 01:09:43.491341989 +0000 UTC m=+0.140722527 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:09:43 np0005546222 podman[181294]: 2025-12-05 01:09:43.523079603 +0000 UTC m=+0.170618325 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 20:09:44 np0005546222 python3.9[181499]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:44 np0005546222 systemd[1]: Started libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope.
Dec  4 20:09:44 np0005546222 podman[181500]: 2025-12-05 01:09:44.729996563 +0000 UTC m=+0.178219936 container exec 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  4 20:09:44 np0005546222 podman[181500]: 2025-12-05 01:09:44.764578044 +0000 UTC m=+0.212801347 container exec_died 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Dec  4 20:09:44 np0005546222 systemd[1]: libpod-conmon-348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88.scope: Deactivated successfully.
Dec  4 20:09:45 np0005546222 python3.9[181679]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:46 np0005546222 python3.9[181831]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  4 20:09:48 np0005546222 podman[181996]: 2025-12-05 01:09:48.266738413 +0000 UTC m=+0.149107241 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  4 20:09:48 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Main process exited, code=exited, status=1/FAILURE
Dec  4 20:09:48 np0005546222 systemd[1]: 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335-3bd305131483247.service: Failed with result 'exit-code'.
Dec  4 20:09:48 np0005546222 python3.9[181997]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:48 np0005546222 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec  4 20:09:48 np0005546222 podman[182016]: 2025-12-05 01:09:48.546868685 +0000 UTC m=+0.148520654 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  4 20:09:48 np0005546222 podman[182016]: 2025-12-05 01:09:48.582129836 +0000 UTC m=+0.183781745 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  4 20:09:48 np0005546222 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec  4 20:09:49 np0005546222 python3.9[182198]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:49 np0005546222 systemd[1]: Started libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope.
Dec  4 20:09:49 np0005546222 podman[182199]: 2025-12-05 01:09:49.868700978 +0000 UTC m=+0.142734678 container exec 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  4 20:09:49 np0005546222 podman[182199]: 2025-12-05 01:09:49.903053622 +0000 UTC m=+0.177087232 container exec_died 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  4 20:09:50 np0005546222 systemd[1]: libpod-conmon-88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335.scope: Deactivated successfully.
Dec  4 20:09:50 np0005546222 podman[182312]: 2025-12-05 01:09:50.728574884 +0000 UTC m=+0.138993984 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-type=git, release-0.7.12=, com.redhat.component=ubi9-container, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec  4 20:09:51 np0005546222 python3.9[182401]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:52 np0005546222 python3.9[182553]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  4 20:09:53 np0005546222 python3.9[182717]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:53 np0005546222 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec  4 20:09:53 np0005546222 podman[182718]: 2025-12-05 01:09:53.950318933 +0000 UTC m=+0.148286596 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, config_id=edpm, version=9.4, distribution-scope=public)
Dec  4 20:09:53 np0005546222 podman[182718]: 2025-12-05 01:09:53.984981096 +0000 UTC m=+0.182948739 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and 
utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec  4 20:09:54 np0005546222 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec  4 20:09:54 np0005546222 podman[182734]: 2025-12-05 01:09:54.073598309 +0000 UTC m=+0.122283227 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': 
'/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  4 20:09:55 np0005546222 python3.9[182922]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  4 20:09:55 np0005546222 systemd[1]: Started libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope.
Dec  4 20:09:55 np0005546222 podman[182923]: 2025-12-05 01:09:55.329343393 +0000 UTC m=+0.167787059 container exec de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30)
Dec  4 20:09:55 np0005546222 podman[182923]: 2025-12-05 01:09:55.364587604 +0000 UTC m=+0.203031240 container exec_died de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Dec  4 20:09:55 np0005546222 systemd[1]: libpod-conmon-de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91.scope: Deactivated successfully.
Dec  4 20:09:56 np0005546222 podman[183079]: 2025-12-05 01:09:56.288359322 +0000 UTC m=+0.122939027 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  4 20:09:56 np0005546222 python3.9[183129]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:57 np0005546222 python3.9[183282]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:58 np0005546222 python3.9[183434]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:09:59 np0005546222 python3.9[183557]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764896997.9472897-778-271174939500151/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:09:59 np0005546222 podman[158197]: time="2025-12-05T01:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:09:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18534 "" "Go-http-client/1.1"
Dec  4 20:09:59 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2989 "" "Go-http-client/1.1"
Dec  4 20:10:00 np0005546222 python3.9[183709]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:10:01 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:10:01 np0005546222 python3.9[183861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:02 np0005546222 python3.9[183939]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:03 np0005546222 python3.9[184091]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:04 np0005546222 python3.9[184169]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x9n4pzcd recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:05 np0005546222 python3.9[184321]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:05 np0005546222 python3.9[184399]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:07 np0005546222 python3.9[184551]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:10:08 np0005546222 python3[184704]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  4 20:10:08 np0005546222 podman[184729]: 2025-12-05 01:10:08.706163254 +0000 UTC m=+0.106394754 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:10:09 np0005546222 python3.9[184878]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:10 np0005546222 python3.9[184956]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:11 np0005546222 python3.9[185108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:12 np0005546222 python3.9[185186]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:12 np0005546222 python3.9[185338]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:13 np0005546222 python3.9[185416]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:13 np0005546222 podman[185417]: 2025-12-05 01:10:13.693676183 +0000 UTC m=+0.110213460 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  4 20:10:13 np0005546222 podman[185418]: 2025-12-05 01:10:13.73338959 +0000 UTC m=+0.132827077 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  4 20:10:14 np0005546222 python3.9[185613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:15 np0005546222 python3.9[185691]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:16 np0005546222 python3.9[185843]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:17 np0005546222 python3.9[185968]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897015.8137627-903-20766571221428/.source.nft follow=False _original_basename=ruleset.j2 checksum=195cfcdc3ed4fc7d98b13eed88ef5cb7956fa1b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:18 np0005546222 podman[186092]: 2025-12-05 01:10:18.588132846 +0000 UTC m=+0.132814397 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  4 20:10:18 np0005546222 python3.9[186136]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:19 np0005546222 python3.9[186291]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:10:20 np0005546222 podman[186418]: 2025-12-05 01:10:20.948824672 +0000 UTC m=+0.113724326 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, name=ubi9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Dec  4 20:10:21 np0005546222 python3.9[186463]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:22 np0005546222 python3.9[186615]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:10:23 np0005546222 python3.9[186768]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  4 20:10:24 np0005546222 podman[186922]: 2025-12-05 01:10:24.383529352 +0000 UTC m=+0.131919939 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec  4 20:10:24 np0005546222 python3.9[186923]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  4 20:10:25 np0005546222 python3.9[187098]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:25 np0005546222 systemd[1]: session-23.scope: Deactivated successfully.
Dec  4 20:10:25 np0005546222 systemd[1]: session-23.scope: Consumed 1min 59.058s CPU time.
Dec  4 20:10:25 np0005546222 systemd-logind[792]: Session 23 logged out. Waiting for processes to exit.
Dec  4 20:10:25 np0005546222 systemd-logind[792]: Removed session 23.
Dec  4 20:10:26 np0005546222 podman[187123]: 2025-12-05 01:10:26.704810832 +0000 UTC m=+0.110878110 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  4 20:10:29 np0005546222 podman[158197]: time="2025-12-05T01:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  4 20:10:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  4 20:10:29 np0005546222 podman[158197]: @ - - [05/Dec/2025:01:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2994 "" "Go-http-client/1.1"
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: ERROR   01:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  4 20:10:31 np0005546222 openstack_network_exporter[160350]: 
Dec  4 20:10:31 np0005546222 systemd-logind[792]: New session 24 of user zuul.
Dec  4 20:10:31 np0005546222 systemd[1]: Started Session 24 of User zuul.
Dec  4 20:10:33 np0005546222 python3.9[187300]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  4 20:10:34 np0005546222 python3.9[187456]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  4 20:10:36 np0005546222 python3.9[187609]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  4 20:10:37 np0005546222 python3.9[187693]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  4 20:10:39 np0005546222 podman[187695]: 2025-12-05 01:10:39.706452865 +0000 UTC m=+0.111244718 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.541 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.542 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.545 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.547 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8316db1370>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:42 np0005546222 ceilometer_agent_compute[154702]: 2025-12-05 01:10:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  4 20:10:44 np0005546222 podman[187806]: 2025-12-05 01:10:44.737330347 +0000 UTC m=+0.136689577 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  4 20:10:44 np0005546222 podman[187813]: 2025-12-05 01:10:44.765819323 +0000 UTC m=+0.158080480 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  4 20:10:45 np0005546222 python3.9[187918]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:46 np0005546222 python3.9[188041]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764897044.2251117-54-91166646645123/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:47 np0005546222 python3.9[188193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  4 20:10:48 np0005546222 python3.9[188345]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  4 20:10:48 np0005546222 podman[188440]: 2025-12-05 01:10:48.930854627 +0000 UTC m=+0.086106728 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  4 20:10:49 np0005546222 python3.9[188487]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764897047.520598-77-78670246744713/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:10:50 compute-0 python3.9[188640]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:10:50 compute-0 systemd[1]: Stopping System Logging Service...
Dec  5 01:10:50 compute-0 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  5 01:10:50 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  5 01:10:50 compute-0 systemd[1]: Stopped System Logging Service.
Dec  5 01:10:50 compute-0 systemd[1]: rsyslog.service: Consumed 1.818s CPU time, 5.2M memory peak, read 0B from disk, written 3.7M to disk.
Dec  5 01:10:50 compute-0 systemd[1]: Starting System Logging Service...
Dec  5 01:10:50 compute-0 rsyslogd[188644]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="188644" x-info="https://www.rsyslog.com"] start
Dec  5 01:10:50 compute-0 systemd[1]: Started System Logging Service.
Dec  5 01:10:50 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:10:50 compute-0 rsyslogd[188644]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  5 01:10:50 compute-0 rsyslogd[188644]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  5 01:10:50 compute-0 rsyslogd[188644]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  5 01:10:50 compute-0 rsyslogd[188644]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec  5 01:10:51 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec  5 01:10:51 compute-0 systemd[1]: session-24.scope: Consumed 15.771s CPU time.
Dec  5 01:10:51 compute-0 systemd-logind[792]: Session 24 logged out. Waiting for processes to exit.
Dec  5 01:10:51 compute-0 systemd-logind[792]: Removed session 24.
Dec  5 01:10:51 compute-0 podman[188674]: 2025-12-05 01:10:51.242122718 +0000 UTC m=+0.118139406 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec  5 01:10:54 compute-0 podman[188695]: 2025-12-05 01:10:54.724515533 +0000 UTC m=+0.130061337 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  5 01:10:57 compute-0 podman[188716]: 2025-12-05 01:10:57.67616102 +0000 UTC m=+0.083431191 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:10:59 compute-0 podman[158197]: time="2025-12-05T01:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  5 01:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2991 "" "Go-http-client/1.1"
Dec  5 01:11:00 compute-0 systemd-logind[792]: New session 25 of user zuul.
Dec  5 01:11:00 compute-0 systemd[1]: Started Session 25 of User zuul.
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: ERROR   01:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:11:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:11:06 compute-0 python3[189481]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:11:08 compute-0 python3[189585]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  5 01:11:09 compute-0 podman[189612]: 2025-12-05 01:11:09.955528788 +0000 UTC m=+0.098783701 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:11:10 compute-0 python3[189613]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:10 compute-0 python3[189662]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:10 compute-0 kernel: loop: module loaded
Dec  5 01:11:10 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec  5 01:11:10 compute-0 python3[189697]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:11 compute-0 lvm[189700]: PV /dev/loop3 not used.
Dec  5 01:11:11 compute-0 lvm[189702]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 01:11:11 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  5 01:11:11 compute-0 lvm[189710]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  5 01:11:11 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  5 01:11:11 compute-0 lvm[189712]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 01:11:11 compute-0 lvm[189712]: VG ceph_vg0 finished
Dec  5 01:11:12 compute-0 python3[189790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:11:12 compute-0 python3[189863]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897071.5515091-36706-13281376886548/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:13 compute-0 python3[189913]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:11:14 compute-0 systemd[1]: Reloading.
Dec  5 01:11:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:11:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:11:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  5 01:11:14 compute-0 bash[189955]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Dec  5 01:11:15 compute-0 podman[189952]: 2025-12-05 01:11:15.007280828 +0000 UTC m=+0.086507529 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, 
container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:11:15 compute-0 podman[189954]: 2025-12-05 01:11:15.09880486 +0000 UTC m=+0.168951931 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  5 01:11:15 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  5 01:11:15 compute-0 lvm[189997]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 01:11:15 compute-0 lvm[189997]: VG ceph_vg0 finished
Dec  5 01:11:15 compute-0 python3[190023]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  5 01:11:17 compute-0 python3[190050]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:17 compute-0 python3[190076]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:17 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec  5 01:11:18 compute-0 python3[190107]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:18 compute-0 lvm[190110]: PV /dev/loop4 not used.
Dec  5 01:11:18 compute-0 lvm[190121]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 01:11:18 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec  5 01:11:18 compute-0 lvm[190123]:  1 logical volume(s) in volume group "ceph_vg1" now active
Dec  5 01:11:18 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec  5 01:11:18 compute-0 python3[190201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:11:19 compute-0 podman[190274]: 2025-12-05 01:11:19.523389731 +0000 UTC m=+0.113244266 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  5 01:11:19 compute-0 python3[190275]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897078.5805826-36733-66576700948134/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:20 compute-0 python3[190345]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:11:20 compute-0 systemd[1]: Reloading.
Dec  5 01:11:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:11:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:11:20 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  5 01:11:20 compute-0 bash[190386]: /dev/loop4: [64513]:4330406 (/var/lib/ceph-osd-1.img)
Dec  5 01:11:20 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  5 01:11:20 compute-0 lvm[190387]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 01:11:20 compute-0 lvm[190387]: VG ceph_vg1 finished
Dec  5 01:11:21 compute-0 python3[190413]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  5 01:11:21 compute-0 podman[190415]: 2025-12-05 01:11:21.718794652 +0000 UTC m=+0.133754743 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=edpm, io.openshift.tags=base rhel9, vcs-type=git, version=9.4, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Dec  5 01:11:22 compute-0 python3[190458]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:23 compute-0 python3[190484]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:23 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec  5 01:11:23 compute-0 python3[190516]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:23 compute-0 lvm[190519]: PV /dev/loop5 not used.
Dec  5 01:11:23 compute-0 lvm[190521]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 01:11:23 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec  5 01:11:23 compute-0 lvm[190528]:  1 logical volume(s) in volume group "ceph_vg2" now active
Dec  5 01:11:23 compute-0 lvm[190532]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 01:11:23 compute-0 lvm[190532]: VG ceph_vg2 finished
Dec  5 01:11:23 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec  5 01:11:24 compute-0 python3[190610]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:11:24 compute-0 podman[190683]: 2025-12-05 01:11:24.926549737 +0000 UTC m=+0.118684841 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec  5 01:11:24 compute-0 python3[190684]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897084.077308-36760-67253213508547/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:25 compute-0 python3[190753]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:11:25 compute-0 systemd[1]: Reloading.
Dec  5 01:11:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:11:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:11:25 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec  5 01:11:26 compute-0 bash[190793]: /dev/loop5: [64513]:4391047 (/var/lib/ceph-osd-2.img)
Dec  5 01:11:26 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec  5 01:11:26 compute-0 lvm[190794]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 01:11:26 compute-0 lvm[190794]: VG ceph_vg2 finished
Dec  5 01:11:28 compute-0 python3[190818]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:11:28 compute-0 podman[190870]: 2025-12-05 01:11:28.673720268 +0000 UTC m=+0.095020644 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:11:29 compute-0 podman[158197]: time="2025-12-05T01:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  5 01:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2995 "" "Go-http-client/1.1"
Dec  5 01:11:30 compute-0 python3[190942]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: ERROR   01:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:11:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:11:34 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  5 01:11:34 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  5 01:11:35 compute-0 python3[191069]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  5 01:11:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  5 01:11:35 compute-0 systemd[1]: run-r79e069bdae344900b779ae4a1d576cad.service: Deactivated successfully.
Dec  5 01:11:35 compute-0 python3[191097]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:37 compute-0 python3[191162]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:37 compute-0 python3[191188]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:38 compute-0 python3[191266]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:11:38 compute-0 python3[191339]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897097.8567848-36917-80351689164863/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:40 compute-0 python3[191441]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:11:40 compute-0 podman[191514]: 2025-12-05 01:11:40.678488009 +0000 UTC m=+0.104493365 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:11:40 compute-0 python3[191515]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897099.337382-36935-237579661061677/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:11:41 compute-0 python3[191589]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:41 compute-0 python3[191617]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:42 compute-0 python3[191645]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:11:42 compute-0 python3[191673]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:11:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  5 01:11:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  5 01:11:42 compute-0 systemd-logind[792]: New session 26 of user ceph-admin.
Dec  5 01:11:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  5 01:11:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  5 01:11:43 compute-0 systemd[191693]: Queued start job for default target Main User Target.
Dec  5 01:11:43 compute-0 systemd[191693]: Created slice User Application Slice.
Dec  5 01:11:43 compute-0 systemd[191693]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  5 01:11:43 compute-0 systemd[191693]: Started Daily Cleanup of User's Temporary Directories.
Dec  5 01:11:43 compute-0 systemd[191693]: Reached target Paths.
Dec  5 01:11:43 compute-0 systemd[191693]: Reached target Timers.
Dec  5 01:11:43 compute-0 systemd[191693]: Starting D-Bus User Message Bus Socket...
Dec  5 01:11:43 compute-0 systemd[191693]: Starting Create User's Volatile Files and Directories...
Dec  5 01:11:43 compute-0 systemd[191693]: Finished Create User's Volatile Files and Directories.
Dec  5 01:11:43 compute-0 systemd[191693]: Listening on D-Bus User Message Bus Socket.
Dec  5 01:11:43 compute-0 systemd[191693]: Reached target Sockets.
Dec  5 01:11:43 compute-0 systemd[191693]: Reached target Basic System.
Dec  5 01:11:43 compute-0 systemd[191693]: Reached target Main User Target.
Dec  5 01:11:43 compute-0 systemd[191693]: Startup finished in 175ms.
Dec  5 01:11:43 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  5 01:11:43 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec  5 01:11:43 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec  5 01:11:43 compute-0 systemd-logind[792]: Session 26 logged out. Waiting for processes to exit.
Dec  5 01:11:43 compute-0 systemd-logind[792]: Removed session 26.
Dec  5 01:11:45 compute-0 podman[191770]: 2025-12-05 01:11:45.670114128 +0000 UTC m=+0.084277216 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  5 01:11:45 compute-0 podman[191771]: 2025-12-05 01:11:45.708963061 +0000 UTC m=+0.119049739 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec  5 01:11:49 compute-0 podman[191833]: 2025-12-05 01:11:49.671236495 +0000 UTC m=+0.083566407 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec  5 01:11:52 compute-0 podman[191853]: 2025-12-05 01:11:52.726714692 +0000 UTC m=+0.130390035 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, container_name=kepler, release=1214.1726694543, version=9.4, name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=edpm)
Dec  5 01:11:53 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec  5 01:11:53 compute-0 systemd[191693]: Activating special unit Exit the Session...
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped target Main User Target.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped target Basic System.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped target Paths.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped target Sockets.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped target Timers.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  5 01:11:53 compute-0 systemd[191693]: Closed D-Bus User Message Bus Socket.
Dec  5 01:11:53 compute-0 systemd[191693]: Stopped Create User's Volatile Files and Directories.
Dec  5 01:11:53 compute-0 systemd[191693]: Removed slice User Application Slice.
Dec  5 01:11:53 compute-0 systemd[191693]: Reached target Shutdown.
Dec  5 01:11:53 compute-0 systemd[191693]: Finished Exit the Session.
Dec  5 01:11:53 compute-0 systemd[191693]: Reached target Exit the Session.
Dec  5 01:11:53 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec  5 01:11:53 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec  5 01:11:53 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  5 01:11:53 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  5 01:11:53 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  5 01:11:53 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  5 01:11:53 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec  5 01:11:55 compute-0 podman[191875]: 2025-12-05 01:11:55.705936002 +0000 UTC m=+0.114392742 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, config_id=edpm, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  5 01:11:59 compute-0 podman[191895]: 2025-12-05 01:11:59.709202718 +0000 UTC m=+0.111147895 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:11:59 compute-0 podman[158197]: time="2025-12-05T01:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 18533 "" "Go-http-client/1.1"
Dec  5 01:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 2998 "" "Go-http-client/1.1"
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: ERROR   01:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:12:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:12:16 compute-0 podman[191933]: 2025-12-05 01:12:16.560543489 +0000 UTC m=+4.964706126 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:12:16 compute-0 podman[191746]: 2025-12-05 01:12:16.600741669 +0000 UTC m=+33.259050898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:16 compute-0 podman[191956]: 2025-12-05 01:12:16.697304247 +0000 UTC m=+0.114749142 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  5 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.724875355 +0000 UTC m=+0.069827495 container create ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:16 compute-0 podman[191957]: 2025-12-05 01:12:16.762301749 +0000 UTC m=+0.171722887 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.696821514 +0000 UTC m=+0.041773704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:16 compute-0 systemd[1]: Started libpod-conmon-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope.
Dec  5 01:12:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.86040501 +0000 UTC m=+0.205357250 container init ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.872678652 +0000 UTC m=+0.217630792 container start ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:12:16 compute-0 podman[191976]: 2025-12-05 01:12:16.878027127 +0000 UTC m=+0.222979267 container attach ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:12:17 compute-0 zen_goodall[192015]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  5 01:12:17 compute-0 systemd[1]: libpod-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[191976]: 2025-12-05 01:12:17.196552864 +0000 UTC m=+0.541505034 container died ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ae68a9956e87d1e61773f661f138d561ab2f577464dfd36aa0e2f4fbcb6645d-merged.mount: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[191976]: 2025-12-05 01:12:17.272316678 +0000 UTC m=+0.617268818 container remove ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172 (image=quay.io/ceph/ceph:v18, name=zen_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:17 compute-0 systemd[1]: libpod-conmon-ec543270cd0457ef1f82168712759ff418ecf8c37571393c546f602340e15172.scope: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.402228761 +0000 UTC m=+0.089231351 container create 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.356710947 +0000 UTC m=+0.043713577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:17 compute-0 systemd[1]: Started libpod-conmon-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope.
Dec  5 01:12:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.542014781 +0000 UTC m=+0.229017431 container init 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.558045196 +0000 UTC m=+0.245047806 container start 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.566150906 +0000 UTC m=+0.253153566 container attach 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:17 compute-0 brave_black[192046]: 167 167
Dec  5 01:12:17 compute-0 systemd[1]: libpod-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.572329223 +0000 UTC m=+0.259331813 container died 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5885bfb2b3b9ffa2b48dddbf2cabd3a3cea005bfc229cf066fffaaa966445ae3-merged.mount: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[192030]: 2025-12-05 01:12:17.627218711 +0000 UTC m=+0.314221311 container remove 01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6 (image=quay.io/ceph/ceph:v18, name=brave_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:17 compute-0 systemd[1]: libpod-conmon-01481de9943c0ce58fda3014cdba4bfcb26a3ee3a394d58456f8569f3b29b0c6.scope: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.762993002 +0000 UTC m=+0.089052405 container create da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.726328448 +0000 UTC m=+0.052387901 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:17 compute-0 systemd[1]: Started libpod-conmon-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope.
Dec  5 01:12:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.901615521 +0000 UTC m=+0.227674894 container init da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.916979497 +0000 UTC m=+0.243038870 container start da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.922393374 +0000 UTC m=+0.248452747 container attach da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  5 01:12:17 compute-0 strange_cartwright[192081]: AQBxMTJp11UxORAAqP9btdAKgROyxK7Fdlo7XQ==
Dec  5 01:12:17 compute-0 systemd[1]: libpod-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope: Deactivated successfully.
Dec  5 01:12:17 compute-0 podman[192064]: 2025-12-05 01:12:17.967712303 +0000 UTC m=+0.293771696 container died da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-86464f0965cf29acc65b13ae1bc9a91d4243e966b25559e3a1a46bd0e24d66cf-merged.mount: Deactivated successfully.
Dec  5 01:12:18 compute-0 podman[192064]: 2025-12-05 01:12:18.043120377 +0000 UTC m=+0.369179790 container remove da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f (image=quay.io/ceph/ceph:v18, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:12:18 compute-0 systemd[1]: libpod-conmon-da9cc9c44d3f0089b2b293771d1e4d0fb2bbf557700437931425edb3e4a3ee8f.scope: Deactivated successfully.
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.162081363 +0000 UTC m=+0.085111079 container create f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:18 compute-0 systemd[1]: Started libpod-conmon-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope.
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.130838756 +0000 UTC m=+0.053868522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.265106306 +0000 UTC m=+0.188136012 container init f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.275812397 +0000 UTC m=+0.198842113 container start f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.286783284 +0000 UTC m=+0.209813070 container attach f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:12:18 compute-0 focused_jemison[192116]: AQByMTJp2/MWEhAAdppAJmw8nfxov6zCgPjqyQ==
Dec  5 01:12:18 compute-0 systemd[1]: libpod-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope: Deactivated successfully.
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.31062107 +0000 UTC m=+0.233650746 container died f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5cc90c81018f9f5c081b5dd83e905babed498b81df64c3b1ddbde2fcdefb572-merged.mount: Deactivated successfully.
Dec  5 01:12:18 compute-0 podman[192100]: 2025-12-05 01:12:18.362742394 +0000 UTC m=+0.285772080 container remove f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72 (image=quay.io/ceph/ceph:v18, name=focused_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:12:18 compute-0 systemd[1]: libpod-conmon-f4472a8da98ef3edfd66fc4b145e047e3fee7ba985a0c92be6d44bae37f1dc72.scope: Deactivated successfully.
Dec  5 01:12:18 compute-0 podman[192133]: 2025-12-05 01:12:18.48245974 +0000 UTC m=+0.081258704 container create 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 01:12:18 compute-0 podman[192133]: 2025-12-05 01:12:18.446037182 +0000 UTC m=+0.044836196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:18 compute-0 systemd[1]: Started libpod-conmon-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope.
Dec  5 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.202821372 +0000 UTC m=+0.801620296 container init 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.21231402 +0000 UTC m=+0.811112944 container start 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.217827929 +0000 UTC m=+0.816626873 container attach 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:12:19 compute-0 ecstatic_swanson[192148]: AQBzMTJphPXxDRAAHMI6rK1a3oVQzlpOqU7aqg==
Dec  5 01:12:19 compute-0 systemd[1]: libpod-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.239007463 +0000 UTC m=+0.837806407 container died 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-024e248e67405574eb33c918e2051ceed320aa0d8909a795c5de36ac91e88a8a-merged.mount: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192133]: 2025-12-05 01:12:19.29678221 +0000 UTC m=+0.895581134 container remove 0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1 (image=quay.io/ceph/ceph:v18, name=ecstatic_swanson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:12:19 compute-0 systemd[1]: libpod-conmon-0c5c71c433e2622982fb707915bc4f2acbb78eb8dc505a4bcba4d267b560b5b1.scope: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.396833033 +0000 UTC m=+0.071810868 container create e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:19 compute-0 systemd[1]: Started libpod-conmon-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope.
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.365413661 +0000 UTC m=+0.040391366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd807a7d2cf47bca87331cf0f3fe77227cae1f18116eb9310b6c495f926a0e30/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.529590532 +0000 UTC m=+0.204568147 container init e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.53616583 +0000 UTC m=+0.211143445 container start e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.541093064 +0000 UTC m=+0.216070699 container attach e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  5 01:12:19 compute-0 gifted_raman[192183]: setting min_mon_release = pacific
Dec  5 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: set fsid to cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:19 compute-0 gifted_raman[192183]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  5 01:12:19 compute-0 systemd[1]: libpod-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.56932305 +0000 UTC m=+0.244300675 container died e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:12:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd807a7d2cf47bca87331cf0f3fe77227cae1f18116eb9310b6c495f926a0e30-merged.mount: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192167]: 2025-12-05 01:12:19.637402495 +0000 UTC m=+0.312380110 container remove e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0 (image=quay.io/ceph/ceph:v18, name=gifted_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:19 compute-0 systemd[1]: libpod-conmon-e34530ed950e0c0a36df95a80ddeba558da1079f074a000eada1ffce0557a5a0.scope: Deactivated successfully.
Dec  5 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.734314303 +0000 UTC m=+0.057045998 container create 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:19 compute-0 systemd[1]: Started libpod-conmon-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope.
Dec  5 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.7098726 +0000 UTC m=+0.032604275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.866856027 +0000 UTC m=+0.189587702 container init 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.882528282 +0000 UTC m=+0.205259937 container start 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:19 compute-0 podman[192202]: 2025-12-05 01:12:19.888265057 +0000 UTC m=+0.210996712 container attach 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 01:12:19 compute-0 podman[192216]: 2025-12-05 01:12:19.912537166 +0000 UTC m=+0.127784316 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  5 01:12:20 compute-0 systemd[1]: libpod-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope: Deactivated successfully.
Dec  5 01:12:20 compute-0 conmon[192224]: conmon 22f08eb77ff4cdf992fd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope/container/memory.events
Dec  5 01:12:20 compute-0 podman[192202]: 2025-12-05 01:12:20.004573661 +0000 UTC m=+0.327305336 container died 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3edbff07925cb80c2d3729dfe44f056b5f1fabe7bf4daaf51ec7875faca9ea36-merged.mount: Deactivated successfully.
Dec  5 01:12:20 compute-0 podman[192202]: 2025-12-05 01:12:20.05838954 +0000 UTC m=+0.381121195 container remove 22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139 (image=quay.io/ceph/ceph:v18, name=sleepy_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:20 compute-0 systemd[1]: libpod-conmon-22f08eb77ff4cdf992fd230b797575e42359447cd2eb564068b3c397f84b1139.scope: Deactivated successfully.
Dec  5 01:12:20 compute-0 systemd[1]: Reloading.
Dec  5 01:12:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:20 compute-0 systemd[1]: Reloading.
Dec  5 01:12:20 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:20 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:20 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec  5 01:12:20 compute-0 systemd[1]: Reloading.
Dec  5 01:12:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:21 compute-0 systemd[1]: Reached target Ceph cluster cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:21 compute-0 systemd[1]: Reloading.
Dec  5 01:12:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:21 compute-0 systemd[1]: Reloading.
Dec  5 01:12:21 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:21 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:22 compute-0 systemd[1]: Created slice Slice /system/ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:22 compute-0 systemd[1]: Reached target System Time Set.
Dec  5 01:12:22 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec  5 01:12:22 compute-0 systemd[1]: Starting Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.533316946 +0000 UTC m=+0.082119888 container create 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.507604899 +0000 UTC m=+0.056407821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.696279425 +0000 UTC m=+0.245082427 container init 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:12:22 compute-0 podman[192514]: 2025-12-05 01:12:22.713606994 +0000 UTC m=+0.262409926 container start 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:22 compute-0 bash[192514]: 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175
Dec  5 01:12:22 compute-0 systemd[1]: Started Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:22 compute-0 ceph-mon[192533]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: pidfile_write: ignore empty --pid-file
Dec  5 01:12:22 compute-0 ceph-mon[192533]: load: jerasure load: lrc 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Git sha 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB SUMMARY
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB Session ID:  MXAEOJG0GZXDO313546X
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                     Options.env: 0x559d75418c40
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                Options.info_log: 0x559d75f78e80
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                                 Options.wal_dir: 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.write_buffer_manager: 0x559d75f88b40
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.row_cache: None
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                              Options.wal_filter: None
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.wal_compression: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_background_jobs: 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.max_total_wal_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.compaction_readahead_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Compression algorithms supported:
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kZSTD supported: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kXpressCompression supported: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kBZip2Compression supported: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kLZ4Compression supported: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kZlibCompression supported: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: 	kSnappyCompression supported: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:           Options.merge_operator: 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.compaction_filter: None
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x559d75f78a80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x559d75f711f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.write_buffer_size: 33554432
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.max_write_buffer_number: 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.compression: NoCompression
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.num_levels: 7
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2a3e37e-222f-447f-af23-2a52f135922f
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142769007, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142771406, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "MXAEOJG0GZXDO313546X", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897142771527, "job": 1, "event": "recovery_finished"}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x559d75f9ae00
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: DB pointer 0x559d76024000
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:12:22 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x559d75f711f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)

** File Read Latency Histogram By Level [default] **
Dec  5 01:12:22 compute-0 ceph-mon[192533]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@-1(???) e0 preinit fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  5 01:12:22 compute-0 ceph-mon[192533]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  5 01:12:22 compute-0 ceph-mon[192533]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-12-05T01:12:19.941718Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).mds e1 new map
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : fsmap 
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mkfs cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  5 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.868395861 +0000 UTC m=+0.086367182 container create 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  5 01:12:22 compute-0 ceph-mon[192533]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  5 01:12:22 compute-0 ceph-mon[192533]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:22 compute-0 systemd[1]: Started libpod-conmon-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope.
Dec  5 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.83624292 +0000 UTC m=+0.054214341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:22 compute-0 podman[192534]: 2025-12-05 01:12:22.999459705 +0000 UTC m=+0.217431056 container init 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.016436325 +0000 UTC m=+0.234407646 container start 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.024164015 +0000 UTC m=+0.242135386 container attach 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:23 compute-0 podman[192586]: 2025-12-05 01:12:23.050365425 +0000 UTC m=+0.109033907 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:12:23 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  5 01:12:23 compute-0 ceph-mon[192533]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598696244' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:  cluster:
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    id:     cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    health: HEALTH_OK
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]: 
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:  services:
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    mon: 1 daemons, quorum compute-0 (age 0.629197s)
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    mgr: no daemons active
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    osd: 0 osds: 0 up, 0 in
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]: 
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:  data:
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    pools:   0 pools, 0 pgs
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    objects: 0 objects, 0 B
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    usage:   0 B used, 0 B / 0 B avail
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]:    pgs:     
Dec  5 01:12:23 compute-0 eloquent_swanson[192589]: 
Dec  5 01:12:23 compute-0 systemd[1]: libpod-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope: Deactivated successfully.
Dec  5 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.496951654 +0000 UTC m=+0.714922995 container died 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-78c6bd03e609017150ebdfd4cacf89c5a8d1b7051e3095084652707a8e386f4d-merged.mount: Deactivated successfully.
Dec  5 01:12:23 compute-0 podman[192534]: 2025-12-05 01:12:23.576249415 +0000 UTC m=+0.794220746 container remove 87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc (image=quay.io/ceph/ceph:v18, name=eloquent_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:23 compute-0 systemd[1]: libpod-conmon-87557651c5dec6ae6170b79c3b16e0456307a95c180a8f60f651452023dcc9fc.scope: Deactivated successfully.
Dec  5 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.714074922 +0000 UTC m=+0.097052223 container create 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.663984784 +0000 UTC m=+0.046962135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:23 compute-0 systemd[1]: Started libpod-conmon-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope.
Dec  5 01:12:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.860019299 +0000 UTC m=+0.242996580 container init 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.877217575 +0000 UTC m=+0.260194836 container start 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:12:23 compute-0 podman[192644]: 2025-12-05 01:12:23.881602384 +0000 UTC m=+0.264579635 container attach 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:12:23 compute-0 ceph-mon[192533]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:24 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  5 01:12:24 compute-0 ceph-mon[192533]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:12:24 compute-0 ceph-mon[192533]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  5 01:12:24 compute-0 strange_shaw[192660]: 
Dec  5 01:12:24 compute-0 strange_shaw[192660]: [global]
Dec  5 01:12:24 compute-0 strange_shaw[192660]: 	fsid = cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:24 compute-0 strange_shaw[192660]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  5 01:12:24 compute-0 strange_shaw[192660]: 	osd_crush_chooseleaf_type = 0
Dec  5 01:12:24 compute-0 systemd[1]: libpod-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope: Deactivated successfully.
Dec  5 01:12:24 compute-0 podman[192686]: 2025-12-05 01:12:24.48886948 +0000 UTC m=+0.057423618 container died 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1e970a9a1469609fce052b3fd1b15408570c460d2cf068895eee2864ed5523f-merged.mount: Deactivated successfully.
Dec  5 01:12:24 compute-0 podman[192686]: 2025-12-05 01:12:24.5674416 +0000 UTC m=+0.135995698 container remove 522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1 (image=quay.io/ceph/ceph:v18, name=strange_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:12:24 compute-0 systemd[1]: libpod-conmon-522061e67fffa5f50605e047153adbd68a865b4e8d4d8244ad127671c8127de1.scope: Deactivated successfully.
Dec  5 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.676197909 +0000 UTC m=+0.064155100 container create c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:12:24 compute-0 systemd[1]: Started libpod-conmon-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope.
Dec  5 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.656535376 +0000 UTC m=+0.044492587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.835591701 +0000 UTC m=+0.223548902 container init c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.851694578 +0000 UTC m=+0.239651769 container start c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:12:24 compute-0 podman[192701]: 2025-12-05 01:12:24.855947843 +0000 UTC m=+0.243905034 container attach c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:24 compute-0 ceph-mon[192533]: from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:12:24 compute-0 ceph-mon[192533]: from='client.? 192.168.122.100:0/2633477333' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  5 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:12:25 compute-0 ceph-mon[192533]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559321731' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:12:25 compute-0 systemd[1]: libpod-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope: Deactivated successfully.
Dec  5 01:12:25 compute-0 podman[192701]: 2025-12-05 01:12:25.32609566 +0000 UTC m=+0.714052921 container died c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:12:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cc97dfe20f62abc0cb3b0ba6f67b619f3dae8c3910c858911f491c9917cdcf8-merged.mount: Deactivated successfully.
Dec  5 01:12:25 compute-0 podman[192701]: 2025-12-05 01:12:25.419815741 +0000 UTC m=+0.807772942 container remove c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad (image=quay.io/ceph/ceph:v18, name=funny_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:25 compute-0 systemd[1]: libpod-conmon-c309a3963a5c7d3f55b3448c7e6606c27b852232fb99763b78f52a1c1367c6ad.scope: Deactivated successfully.
Dec  5 01:12:25 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:12:25 compute-0 ceph-mon[192533]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  5 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  5 01:12:25 compute-0 ceph-mon[192533]: mon.compute-0@0(leader) e1 shutdown
Dec  5 01:12:25 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192529]: 2025-12-05T01:12:25.798+0000 7fd0ac7f5640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  5 01:12:25 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192529]: 2025-12-05T01:12:25.798+0000 7fd0ac7f5640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  5 01:12:25 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  5 01:12:25 compute-0 ceph-mon[192533]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  5 01:12:25 compute-0 podman[192781]: 2025-12-05 01:12:25.974321626 +0000 UTC m=+0.246281378 container died 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce88672616984df0293eb4b55333de5b5da26ac5e1d7ec352ccbdf8541f386b4-merged.mount: Deactivated successfully.
Dec  5 01:12:26 compute-0 podman[192781]: 2025-12-05 01:12:26.029851742 +0000 UTC m=+0.301811454 container remove 34f99c2940f04f7996eb5f6501f7e5baf464ccc95c9e67a651bd8354fdd5f175 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:26 compute-0 bash[192781]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0
Dec  5 01:12:26 compute-0 podman[192803]: 2025-12-05 01:12:26.165551611 +0000 UTC m=+0.120239381 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:12:26 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0.service: Deactivated successfully.
Dec  5 01:12:26 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:26 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0.service: Consumed 2.088s CPU time.
Dec  5 01:12:26 compute-0 systemd[1]: Starting Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.741071506 +0000 UTC m=+0.082011434 container create aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.707143496 +0000 UTC m=+0.048083454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ce083d781d282b541ced7ec033de49bbb6fca722f3765d693518efaa94c656/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.862385896 +0000 UTC m=+0.203325844 container init aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:26 compute-0 podman[192895]: 2025-12-05 01:12:26.87655795 +0000 UTC m=+0.217497878 container start aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:26 compute-0 bash[192895]: aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9
Dec  5 01:12:26 compute-0 systemd[1]: Started Ceph mon.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:26 compute-0 ceph-mon[192914]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:12:26 compute-0 ceph-mon[192914]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: pidfile_write: ignore empty --pid-file
Dec  5 01:12:26 compute-0 ceph-mon[192914]: load: jerasure load: lrc 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Git sha 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB SUMMARY
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB Session ID:  4QDKSXZ9659NG2VXPQ9P
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 54564 ; 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                     Options.env: 0x5646352cfc40
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                Options.info_log: 0x5646377a5040
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                                 Options.wal_dir: 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.write_buffer_manager: 0x5646377b4b40
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.row_cache: None
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                              Options.wal_filter: None
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.wal_compression: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_background_jobs: 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.max_total_wal_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.compaction_readahead_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Compression algorithms supported:
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kZSTD supported: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kXpressCompression supported: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kBZip2Compression supported: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kLZ4Compression supported: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kZlibCompression supported: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: #011kSnappyCompression supported: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:           Options.merge_operator: 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.compaction_filter: None
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5646377a4c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56463779d1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.write_buffer_size: 33554432
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.max_write_buffer_number: 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.compression: NoCompression
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.num_levels: 7
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d2a3e37e-222f-447f-af23-2a52f135922f
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146941467, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146946777, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 52695, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3023, "raw_average_key_size": 30, "raw_value_size": 50297, "raw_average_value_size": 502, "num_data_blocks": 8, "num_entries": 100, "num_filter_entries": 100, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897146, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897146947081, "job": 1, "event": "recovery_finished"}
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5646377c6e00
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: DB pointer 0x5646378ce000
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   54.78 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   54.78 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.28 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 01:12:26 compute-0 ceph-mon[192914]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???) e1 preinit fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).mds e1 new map
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  5 01:12:26 compute-0 ceph-mon[192914]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  5 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Dec  5 01:12:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  5 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap 
Dec  5 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  5 01:12:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  5 01:12:26 compute-0 podman[192915]: 2025-12-05 01:12:26.98902683 +0000 UTC m=+0.069539107 container create 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:27 compute-0 ceph-mon[192914]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:26.965175823 +0000 UTC m=+0.045688130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:27 compute-0 systemd[1]: Started libpod-conmon-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope.
Dec  5 01:12:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.163316525 +0000 UTC m=+0.243828842 container init 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.180520522 +0000 UTC m=+0.261032829 container start 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.189710651 +0000 UTC m=+0.270222968 container attach 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Dec  5 01:12:27 compute-0 systemd[1]: libpod-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope: Deactivated successfully.
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.676000486 +0000 UTC m=+0.756512813 container died 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:12:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6339b02d6396312a2b050a73d887beb13d99908b4053628f14d44ba0123f964f-merged.mount: Deactivated successfully.
Dec  5 01:12:27 compute-0 podman[192915]: 2025-12-05 01:12:27.752392948 +0000 UTC m=+0.832905215 container remove 75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe (image=quay.io/ceph/ceph:v18, name=jovial_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:12:27 compute-0 systemd[1]: libpod-conmon-75f3b56afab374b6945e02396a0fa8d5256c512c94e7cdf5e8ac3e78eaf8e3fe.scope: Deactivated successfully.
Dec  5 01:12:27 compute-0 podman[193004]: 2025-12-05 01:12:27.860431327 +0000 UTC m=+0.062550107 container create 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:12:27 compute-0 systemd[1]: Started libpod-conmon-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope.
Dec  5 01:12:27 compute-0 podman[193004]: 2025-12-05 01:12:27.837509046 +0000 UTC m=+0.039627866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.041457456 +0000 UTC m=+0.243576306 container init 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.059064893 +0000 UTC m=+0.261183703 container start 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.06632813 +0000 UTC m=+0.268446980 container attach 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:12:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Dec  5 01:12:28 compute-0 systemd[1]: libpod-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope: Deactivated successfully.
Dec  5 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.617164215 +0000 UTC m=+0.819282985 container died 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-617e7f731a11569899a3ea61defb32337fb1685e34b2e73de1bc171a9b620988-merged.mount: Deactivated successfully.
Dec  5 01:12:28 compute-0 podman[193004]: 2025-12-05 01:12:28.691425928 +0000 UTC m=+0.893544738 container remove 1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61 (image=quay.io/ceph/ceph:v18, name=festive_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:12:28 compute-0 systemd[1]: libpod-conmon-1e2d4cc4d463ca799332e59ab207e15453d06e82fac277bedf2d9b9617430b61.scope: Deactivated successfully.
Dec  5 01:12:28 compute-0 systemd[1]: Reloading.
Dec  5 01:12:28 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:28 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:29 compute-0 systemd[1]: Reloading.
Dec  5 01:12:29 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:12:29 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:12:29 compute-0 systemd[1]: Starting Ceph mgr.compute-0.afshmv for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:12:29 compute-0 podman[158197]: time="2025-12-05T01:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 20380 "" "Go-http-client/1.1"
Dec  5 01:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3463 "" "Go-http-client/1.1"
Dec  5 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.069130244 +0000 UTC m=+0.078415497 container create 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.044246569 +0000 UTC m=+0.053531912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/959de572a3ea1e2fee8fdcec547d2287ddc96a35bffb0a51d24d7cee808aee66/merged/var/lib/ceph/mgr/ceph-compute-0.afshmv supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.192431677 +0000 UTC m=+0.201717010 container init 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:12:30 compute-0 podman[193178]: 2025-12-05 01:12:30.212331537 +0000 UTC m=+0.221616820 container start 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:30 compute-0 bash[193178]: 08717604c330d387a7f8ede377aa8d6af954338591c6c50fbdef8fe4a8f58c24
Dec  5 01:12:30 compute-0 systemd[1]: Started Ceph mgr.compute-0.afshmv for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:12:30 compute-0 podman[193190]: 2025-12-05 01:12:30.275769717 +0000 UTC m=+0.137798977 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: pidfile_write: ignore empty --pid-file
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'alerts'
Dec  5 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.425862697 +0000 UTC m=+0.140736987 container create 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.33673569 +0000 UTC m=+0.051609990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:30 compute-0 systemd[1]: Started libpod-conmon-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope.
Dec  5 01:12:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.624484672 +0000 UTC m=+0.339358972 container init 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.645852432 +0000 UTC m=+0.360726722 container start 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:12:30 compute-0 podman[193214]: 2025-12-05 01:12:30.652976845 +0000 UTC m=+0.367851105 container attach 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'balancer'
Dec  5 01:12:30 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:30.711+0000 7fe0056c6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:12:30 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'cephadm'
Dec  5 01:12:30 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:30.966+0000 7fe0056c6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:12:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200958537' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:31 compute-0 confident_elion[193259]: 
Dec  5 01:12:31 compute-0 confident_elion[193259]: {
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "health": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "status": "HEALTH_OK",
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "checks": {},
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "mutes": []
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "election_epoch": 5,
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "quorum": [
Dec  5 01:12:31 compute-0 confident_elion[193259]:        0
Dec  5 01:12:31 compute-0 confident_elion[193259]:    ],
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "quorum_names": [
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "compute-0"
Dec  5 01:12:31 compute-0 confident_elion[193259]:    ],
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "quorum_age": 4,
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "monmap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "epoch": 1,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "min_mon_release_name": "reef",
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_mons": 1
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "osdmap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "epoch": 1,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_osds": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_up_osds": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "osd_up_since": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_in_osds": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "osd_in_since": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_remapped_pgs": 0
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "pgmap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "pgs_by_state": [],
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_pgs": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_pools": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_objects": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "data_bytes": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "bytes_used": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "bytes_avail": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "bytes_total": 0
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "fsmap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "epoch": 1,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "by_rank": [],
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "up:standby": 0
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "mgrmap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "available": false,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "num_standbys": 0,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "modules": [
Dec  5 01:12:31 compute-0 confident_elion[193259]:            "iostat",
Dec  5 01:12:31 compute-0 confident_elion[193259]:            "nfs",
Dec  5 01:12:31 compute-0 confident_elion[193259]:            "restful"
Dec  5 01:12:31 compute-0 confident_elion[193259]:        ],
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "services": {}
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "servicemap": {
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "epoch": 1,
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:31 compute-0 confident_elion[193259]:        "services": {}
Dec  5 01:12:31 compute-0 confident_elion[193259]:    },
Dec  5 01:12:31 compute-0 confident_elion[193259]:    "progress_events": {}
Dec  5 01:12:31 compute-0 confident_elion[193259]: }
Dec  5 01:12:31 compute-0 systemd[1]: libpod-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope: Deactivated successfully.
Dec  5 01:12:31 compute-0 podman[193214]: 2025-12-05 01:12:31.16210983 +0000 UTC m=+0.876984100 container died 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-efc280c8b9d433f96b0dbef142aa5f19ee03aecc39310366b0805cd11595af05-merged.mount: Deactivated successfully.
Dec  5 01:12:31 compute-0 podman[193214]: 2025-12-05 01:12:31.25985242 +0000 UTC m=+0.974726680 container remove 6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43 (image=quay.io/ceph/ceph:v18, name=confident_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:12:31 compute-0 systemd[1]: libpod-conmon-6ece579c5264561ca83d11a8a733f1652bd25dd2e97dc7ed917c45134e32ae43.scope: Deactivated successfully.
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: ERROR   01:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:12:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'crash'
Dec  5 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  5 01:12:33 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'dashboard'
Dec  5 01:12:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:33.380+0000 7fe0056c6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  5 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.442132581 +0000 UTC m=+0.131024364 container create 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Dec  5 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.400442171 +0000 UTC m=+0.089333994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:33 compute-0 systemd[1]: Started libpod-conmon-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope.
Dec  5 01:12:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.608557704 +0000 UTC m=+0.297449467 container init 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.61876091 +0000 UTC m=+0.307652663 container start 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:33 compute-0 podman[193311]: 2025-12-05 01:12:33.624563418 +0000 UTC m=+0.313455201 container attach 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:12:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3845166885' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:34 compute-0 tender_gagarin[193328]: 
Dec  5 01:12:34 compute-0 tender_gagarin[193328]: {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "health": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "status": "HEALTH_OK",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "checks": {},
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "mutes": []
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "election_epoch": 5,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "quorum": [
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        0
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    ],
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "quorum_names": [
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "compute-0"
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    ],
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "quorum_age": 7,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "monmap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "epoch": 1,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "min_mon_release_name": "reef",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_mons": 1
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "osdmap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "epoch": 1,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_osds": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_up_osds": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "osd_up_since": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_in_osds": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "osd_in_since": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_remapped_pgs": 0
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "pgmap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "pgs_by_state": [],
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_pgs": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_pools": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_objects": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "data_bytes": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "bytes_used": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "bytes_avail": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "bytes_total": 0
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "fsmap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "epoch": 1,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "by_rank": [],
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "up:standby": 0
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "mgrmap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "available": false,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "num_standbys": 0,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "modules": [
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:            "iostat",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:            "nfs",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:            "restful"
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        ],
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "services": {}
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "servicemap": {
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "epoch": 1,
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:        "services": {}
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    },
Dec  5 01:12:34 compute-0 tender_gagarin[193328]:    "progress_events": {}
Dec  5 01:12:34 compute-0 tender_gagarin[193328]: }
Dec  5 01:12:34 compute-0 systemd[1]: libpod-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope: Deactivated successfully.
Dec  5 01:12:34 compute-0 podman[193311]: 2025-12-05 01:12:34.056035457 +0000 UTC m=+0.744927280 container died 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:12:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb4803719d18f7bcf9ba1524cc64198b54fbc4e0b07af8aec95c2e7b3e90edaf-merged.mount: Deactivated successfully.
Dec  5 01:12:34 compute-0 podman[193311]: 2025-12-05 01:12:34.144792693 +0000 UTC m=+0.833684446 container remove 89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331 (image=quay.io/ceph/ceph:v18, name=tender_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:34 compute-0 systemd[1]: libpod-conmon-89254c45f5f256d0d879c2cd958ec9c7714e22173e4a725181069f67e0308331.scope: Deactivated successfully.
Dec  5 01:12:34 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'devicehealth'
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'diskprediction_local'
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.187+0000 7fe0056c6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]:  from numpy import show_config as show_numpy_config
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'influx'
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.744+0000 7fe0056c6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  5 01:12:35 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'insights'
Dec  5 01:12:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:35.994+0000 7fe0056c6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  5 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'iostat'
Dec  5 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.310818753 +0000 UTC m=+0.104753431 container create c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.274768976 +0000 UTC m=+0.068703664 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:36 compute-0 systemd[1]: Started libpod-conmon-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope.
Dec  5 01:12:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.520330914 +0000 UTC m=+0.314265612 container init c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  5 01:12:36 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'k8sevents'
Dec  5 01:12:36 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:36.524+0000 7fe0056c6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  5 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.537135979 +0000 UTC m=+0.331070657 container start c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:12:36 compute-0 podman[193365]: 2025-12-05 01:12:36.551000535 +0000 UTC m=+0.344935223 container attach c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:12:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940052462' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:37 compute-0 distracted_villani[193382]: 
Dec  5 01:12:37 compute-0 distracted_villani[193382]: {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "health": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "status": "HEALTH_OK",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "checks": {},
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "mutes": []
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "election_epoch": 5,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "quorum": [
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        0
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    ],
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "quorum_names": [
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "compute-0"
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    ],
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "quorum_age": 10,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "monmap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "epoch": 1,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "min_mon_release_name": "reef",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_mons": 1
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "osdmap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "epoch": 1,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_osds": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_up_osds": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "osd_up_since": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_in_osds": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "osd_in_since": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_remapped_pgs": 0
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "pgmap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "pgs_by_state": [],
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_pgs": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_pools": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_objects": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "data_bytes": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "bytes_used": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "bytes_avail": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "bytes_total": 0
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "fsmap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "epoch": 1,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "by_rank": [],
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "up:standby": 0
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "mgrmap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "available": false,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "num_standbys": 0,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "modules": [
Dec  5 01:12:37 compute-0 distracted_villani[193382]:            "iostat",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:            "nfs",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:            "restful"
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        ],
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "services": {}
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "servicemap": {
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "epoch": 1,
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:37 compute-0 distracted_villani[193382]:        "services": {}
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    },
Dec  5 01:12:37 compute-0 distracted_villani[193382]:    "progress_events": {}
Dec  5 01:12:37 compute-0 distracted_villani[193382]: }
Dec  5 01:12:37 compute-0 systemd[1]: libpod-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope: Deactivated successfully.
Dec  5 01:12:37 compute-0 conmon[193382]: conmon c955be14b80ce1c8bff9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope/container/memory.events
Dec  5 01:12:37 compute-0 podman[193408]: 2025-12-05 01:12:37.117965988 +0000 UTC m=+0.052436412 container died c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55ffd125a483e120a99ffe6bdd0511f48d6b39a57f13938c8d65b9ca6398dac-merged.mount: Deactivated successfully.
Dec  5 01:12:37 compute-0 podman[193408]: 2025-12-05 01:12:37.201701769 +0000 UTC m=+0.136172113 container remove c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73 (image=quay.io/ceph/ceph:v18, name=distracted_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:37 compute-0 systemd[1]: libpod-conmon-c955be14b80ce1c8bff98f40e8eaa1a66bba0f67f1e9b2371502193dcca2df73.scope: Deactivated successfully.
Dec  5 01:12:38 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'localpool'
Dec  5 01:12:38 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mds_autoscaler'
Dec  5 01:12:39 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mirroring'
Dec  5 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.297215217 +0000 UTC m=+0.044087307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:39 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'nfs'
Dec  5 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.78638462 +0000 UTC m=+0.533256660 container create 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  5 01:12:39 compute-0 systemd[1]: Started libpod-conmon-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope.
Dec  5 01:12:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.898762687 +0000 UTC m=+0.645634757 container init 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.906696593 +0000 UTC m=+0.653568653 container start 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:12:39 compute-0 podman[193421]: 2025-12-05 01:12:39.920649531 +0000 UTC m=+0.667521581 container attach 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1185180056' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:40 compute-0 heuristic_benz[193436]: 
Dec  5 01:12:40 compute-0 heuristic_benz[193436]: {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "health": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "status": "HEALTH_OK",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "checks": {},
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "mutes": []
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "election_epoch": 5,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "quorum": [
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        0
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    ],
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "quorum_names": [
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "compute-0"
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    ],
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "quorum_age": 13,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "monmap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "epoch": 1,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "min_mon_release_name": "reef",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_mons": 1
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "osdmap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "epoch": 1,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_osds": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_up_osds": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "osd_up_since": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_in_osds": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "osd_in_since": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_remapped_pgs": 0
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "pgmap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "pgs_by_state": [],
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_pgs": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_pools": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_objects": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "data_bytes": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "bytes_used": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "bytes_avail": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "bytes_total": 0
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "fsmap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "epoch": 1,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "by_rank": [],
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "up:standby": 0
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "mgrmap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "available": false,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "num_standbys": 0,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "modules": [
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:            "iostat",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:            "nfs",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:            "restful"
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        ],
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "services": {}
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "servicemap": {
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "epoch": 1,
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:        "services": {}
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    },
Dec  5 01:12:40 compute-0 heuristic_benz[193436]:    "progress_events": {}
Dec  5 01:12:40 compute-0 heuristic_benz[193436]: }
Dec  5 01:12:40 compute-0 systemd[1]: libpod-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope: Deactivated successfully.
Dec  5 01:12:40 compute-0 podman[193421]: 2025-12-05 01:12:40.328591162 +0000 UTC m=+1.075463242 container died 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d7fc77e248b7d93be8769172a6d1c0946ceaa731778e4a59af22f086ec0bcd-merged.mount: Deactivated successfully.
Dec  5 01:12:40 compute-0 podman[193421]: 2025-12-05 01:12:40.405405855 +0000 UTC m=+1.152277905 container remove 2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36 (image=quay.io/ceph/ceph:v18, name=heuristic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:40.416+0000 7fe0056c6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  5 01:12:40 compute-0 ceph-mgr[193209]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  5 01:12:40 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'orchestrator'
Dec  5 01:12:40 compute-0 systemd[1]: libpod-conmon-2f848e3419703a14b432a0dee4130b7d5f755a5270875262d19218529e2b4e36.scope: Deactivated successfully.
Dec  5 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.093+0000 7fe0056c6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_perf_query'
Dec  5 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.369+0000 7fe0056c6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_support'
Dec  5 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.605+0000 7fe0056c6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'pg_autoscaler'
Dec  5 01:12:41 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:41.891+0000 7fe0056c6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  5 01:12:41 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'progress'
Dec  5 01:12:42 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:42.133+0000 7fe0056c6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  5 01:12:42 compute-0 ceph-mgr[193209]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  5 01:12:42 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'prometheus'
Dec  5 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.538445991 +0000 UTC m=+0.075252301 container create 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.541 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.542 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.542 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:12:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:12:42 compute-0 systemd[1]: Started libpod-conmon-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope.
Dec  5 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.516182098 +0000 UTC m=+0.052988408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.677694197 +0000 UTC m=+0.214500527 container init 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.6848225 +0000 UTC m=+0.221628810 container start 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:12:42 compute-0 podman[193473]: 2025-12-05 01:12:42.699994281 +0000 UTC m=+0.236800621 container attach 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291617161' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]: 
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]: {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "health": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "status": "HEALTH_OK",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "checks": {},
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "mutes": []
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "election_epoch": 5,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "quorum": [
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        0
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    ],
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "quorum_names": [
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "compute-0"
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    ],
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "quorum_age": 16,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "monmap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "epoch": 1,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "min_mon_release_name": "reef",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_mons": 1
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "osdmap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "epoch": 1,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_osds": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_up_osds": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "osd_up_since": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_in_osds": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "osd_in_since": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_remapped_pgs": 0
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "pgmap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "pgs_by_state": [],
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_pgs": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_pools": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_objects": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "data_bytes": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "bytes_used": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "bytes_avail": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "bytes_total": 0
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "fsmap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "epoch": 1,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "by_rank": [],
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "up:standby": 0
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "mgrmap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "available": false,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "num_standbys": 0,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "modules": [
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:            "iostat",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:            "nfs",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:            "restful"
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        ],
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "services": {}
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "servicemap": {
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "epoch": 1,
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:        "services": {}
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    },
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]:    "progress_events": {}
Dec  5 01:12:43 compute-0 stupefied_poitras[193489]: }
Dec  5 01:12:43 compute-0 systemd[1]: libpod-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope: Deactivated successfully.
Dec  5 01:12:43 compute-0 podman[193473]: 2025-12-05 01:12:43.118163779 +0000 UTC m=+0.654970159 container died 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:12:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c717510ea68f12fb2231c7daf7a659197621a92da4c682a1e1efebd6f8b032-merged.mount: Deactivated successfully.
Dec  5 01:12:43 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:43.190+0000 7fe0056c6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  5 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  5 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rbd_support'
Dec  5 01:12:43 compute-0 podman[193473]: 2025-12-05 01:12:43.191871167 +0000 UTC m=+0.728677477 container remove 8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e (image=quay.io/ceph/ceph:v18, name=stupefied_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:12:43 compute-0 systemd[1]: libpod-conmon-8f8f1cbda74ae7356000fbcec7db2d12cd5d68204e13fbadf8c3be02733fa21e.scope: Deactivated successfully.
Dec  5 01:12:43 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:43.493+0000 7fe0056c6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  5 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  5 01:12:43 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'restful'
Dec  5 01:12:44 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rgw'
Dec  5 01:12:45 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:45.025+0000 7fe0056c6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  5 01:12:45 compute-0 ceph-mgr[193209]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  5 01:12:45 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rook'
Dec  5 01:12:45 compute-0 podman[193528]: 2025-12-05 01:12:45.266080899 +0000 UTC m=+0.036379138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.307+0000 7fe0056c6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'selftest'
Dec  5 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.501329045 +0000 UTC m=+2.271627294 container create 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:47 compute-0 systemd[1]: Started libpod-conmon-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope.
Dec  5 01:12:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.610+0000 7fe0056c6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'snap_schedule'
Dec  5 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.626834178 +0000 UTC m=+2.397132407 container init 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.644290852 +0000 UTC m=+2.414589061 container start 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:47 compute-0 podman[193528]: 2025-12-05 01:12:47.650356266 +0000 UTC m=+2.420654495 container attach 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:12:47 compute-0 podman[193542]: 2025-12-05 01:12:47.707944588 +0000 UTC m=+0.137854559 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  5 01:12:47 compute-0 podman[193545]: 2025-12-05 01:12:47.716241413 +0000 UTC m=+0.143786070 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:12:47 compute-0 podman[193546]: 2025-12-05 01:12:47.751050167 +0000 UTC m=+0.166175207 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:12:47 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:47.904+0000 7fe0056c6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  5 01:12:47 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'stats'
Dec  5 01:12:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3146128602' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]: 
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]: {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "health": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "status": "HEALTH_OK",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "checks": {},
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "mutes": []
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "election_epoch": 5,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "quorum": [
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        0
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    ],
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "quorum_names": [
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "compute-0"
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    ],
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "quorum_age": 21,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "monmap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "epoch": 1,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "min_mon_release_name": "reef",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_mons": 1
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "osdmap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "epoch": 1,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_osds": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_up_osds": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "osd_up_since": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_in_osds": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "osd_in_since": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_remapped_pgs": 0
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "pgmap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "pgs_by_state": [],
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_pgs": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_pools": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_objects": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "data_bytes": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "bytes_used": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "bytes_avail": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "bytes_total": 0
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "fsmap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "epoch": 1,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "by_rank": [],
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "up:standby": 0
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "mgrmap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "available": false,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "num_standbys": 0,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "modules": [
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:            "iostat",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:            "nfs",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:            "restful"
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        ],
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "services": {}
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "servicemap": {
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "epoch": 1,
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:        "services": {}
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    },
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]:    "progress_events": {}
Dec  5 01:12:48 compute-0 heuristic_kilby[193547]: }
Dec  5 01:12:48 compute-0 systemd[1]: libpod-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope: Deactivated successfully.
Dec  5 01:12:48 compute-0 conmon[193547]: conmon 8544cce5a2fad0e7200f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope/container/memory.events
Dec  5 01:12:48 compute-0 podman[193635]: 2025-12-05 01:12:48.163807568 +0000 UTC m=+0.054799957 container died 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'status'
Dec  5 01:12:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b073a8a48e3a7119d3e13c57e5cd9c9701e7c3f4aaacf804d1f57e3ebc83b78f-merged.mount: Deactivated successfully.
Dec  5 01:12:48 compute-0 podman[193635]: 2025-12-05 01:12:48.237949179 +0000 UTC m=+0.128941538 container remove 8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b (image=quay.io/ceph/ceph:v18, name=heuristic_kilby, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:48 compute-0 systemd[1]: libpod-conmon-8544cce5a2fad0e7200f0c533662eb4d7b046ba11f4d4ecf9ed81638c2eb838b.scope: Deactivated successfully.
Dec  5 01:12:48 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:48.447+0000 7fe0056c6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  5 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  5 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telegraf'
Dec  5 01:12:48 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:48.699+0000 7fe0056c6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  5 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  5 01:12:48 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telemetry'
Dec  5 01:12:49 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:49.320+0000 7fe0056c6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  5 01:12:49 compute-0 ceph-mgr[193209]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  5 01:12:49 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'test_orchestrator'
Dec  5 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.004+0000 7fe0056c6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'volumes'
Dec  5 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.381690995 +0000 UTC m=+0.090469523 container create 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.35054682 +0000 UTC m=+0.059325338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:50 compute-0 systemd[1]: Started libpod-conmon-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope.
Dec  5 01:12:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.568118726 +0000 UTC m=+0.276897254 container init 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:12:50 compute-0 podman[193665]: 2025-12-05 01:12:50.569560977 +0000 UTC m=+0.124547042 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  5 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.586295593 +0000 UTC m=+0.295074081 container start 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:50 compute-0 podman[193651]: 2025-12-05 01:12:50.592362686 +0000 UTC m=+0.301141394 container attach 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.695+0000 7fe0056c6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'zabbix'
Dec  5 01:12:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:50.926+0000 7fe0056c6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: ms_deliver_dispatch: unhandled message 0x559dd83f31e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.afshmv
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr handle_mgr_map Activating!
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr handle_mgr_map I am now activating
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.afshmv(active, starting, since 0.0182515s)
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e1 all = 1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  5 01:12:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"} v 0) v1
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: balancer
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: crash
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Starting
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:12:50
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: devicehealth
Dec  5 01:12:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Manager daemon compute-0.afshmv is now available
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: iostat
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Starting
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: nfs
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: orchestrator
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: pg_autoscaler
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: progress
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loading...
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] No stored events to load
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded [] historic events
Dec  5 01:12:50 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded OSDMap, ready.
Dec  5 01:12:50 compute-0 ceph-mon[192914]: Activating manager daemon compute-0.afshmv
Dec  5 01:12:50 compute-0 ceph-mon[192914]: Manager daemon compute-0.afshmv is now available
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] recovery thread starting
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] starting setup
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: rbd_support
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2314212956' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: restful
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]: 
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: status
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]: {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "health": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "status": "HEALTH_OK",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "checks": {},
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "mutes": []
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "election_epoch": 5,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "quorum": [
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        0
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    ],
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "quorum_names": [
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "compute-0"
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    ],
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "quorum_age": 24,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "monmap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "epoch": 1,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "min_mon_release_name": "reef",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_mons": 1
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "osdmap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "epoch": 1,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_osds": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_up_osds": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "osd_up_since": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_in_osds": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "osd_in_since": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_remapped_pgs": 0
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "pgmap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "pgs_by_state": [],
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_pgs": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_pools": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_objects": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "data_bytes": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "bytes_used": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "bytes_avail": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "bytes_total": 0
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "fsmap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "epoch": 1,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "by_rank": [],
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "up:standby": 0
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "mgrmap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "available": false,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "num_standbys": 0,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "modules": [
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:            "iostat",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:            "nfs",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:            "restful"
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        ],
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "services": {}
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "servicemap": {
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "epoch": 1,
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:        "services": {}
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    },
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]:    "progress_events": {}
Dec  5 01:12:51 compute-0 hopeful_heyrovsky[193677]: }
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"} v 0) v1
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [restful INFO root] server_addr: :: server_port: 8003
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [restful WARNING root] server not running: no certificate configured
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: telemetry
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] PerfHandler: starting
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TaskHandler: starting
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"} v 0) v1
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: [rbd_support INFO root] setup complete
Dec  5 01:12:51 compute-0 podman[193651]: 2025-12-05 01:12:51.043089972 +0000 UTC m=+0.751868440 container died 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:12:51 compute-0 systemd[1]: libpod-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope: Deactivated successfully.
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:51 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: volumes
Dec  5 01:12:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff8dc7fdf53053a3e97dafb1bad8e66dc4dd05390a0a0f85a22601e77800fab5-merged.mount: Deactivated successfully.
Dec  5 01:12:51 compute-0 podman[193651]: 2025-12-05 01:12:51.102614434 +0000 UTC m=+0.811392912 container remove 2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b (image=quay.io/ceph/ceph:v18, name=hopeful_heyrovsky, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:12:51 compute-0 systemd[1]: libpod-conmon-2aab2c0ec96c2cc1a4e4213d01cee9387a54476ca416535c43667e9671b7cb5b.scope: Deactivated successfully.
Dec  5 01:12:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.afshmv(active, since 1.03193s)
Dec  5 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec  5 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec  5 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:52 compute-0 ceph-mon[192914]: from='mgr.14102 192.168.122.100:0/847079286' entity='mgr.compute-0.afshmv' 
Dec  5 01:12:52 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:12:53 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.afshmv(active, since 2s)
Dec  5 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.218617141 +0000 UTC m=+0.068867169 container create abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:53 compute-0 systemd[1]: Started libpod-conmon-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope.
Dec  5 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.197703457 +0000 UTC m=+0.047953465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.354075303 +0000 UTC m=+0.204325391 container init abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.370627504 +0000 UTC m=+0.220877532 container start abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 01:12:53 compute-0 podman[193803]: 2025-12-05 01:12:53.378428716 +0000 UTC m=+0.228678734 container attach abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:53 compute-0 podman[193817]: 2025-12-05 01:12:53.429418365 +0000 UTC m=+0.136075940 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., config_id=edpm, distribution-scope=public, name=ubi9, release=1214.1726694543, version=9.4, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Dec  5 01:12:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Dec  5 01:12:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287451921' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]: 
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]: {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "health": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "status": "HEALTH_OK",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "checks": {},
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "mutes": []
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "election_epoch": 5,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "quorum": [
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        0
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    ],
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "quorum_names": [
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "compute-0"
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    ],
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "quorum_age": 27,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "monmap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "epoch": 1,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "min_mon_release_name": "reef",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_mons": 1
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "osdmap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "epoch": 1,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_osds": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_up_osds": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "osd_up_since": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_in_osds": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "osd_in_since": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_remapped_pgs": 0
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "pgmap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "pgs_by_state": [],
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_pgs": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_pools": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_objects": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "data_bytes": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "bytes_used": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "bytes_avail": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "bytes_total": 0
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "fsmap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "epoch": 1,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "by_rank": [],
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "up:standby": 0
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "mgrmap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "available": true,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "num_standbys": 0,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "modules": [
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:            "iostat",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:            "nfs",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:            "restful"
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        ],
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "services": {}
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "servicemap": {
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "epoch": 1,
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "modified": "2025-12-05T01:12:22.836369+0000",
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:        "services": {}
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    },
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]:    "progress_events": {}
Dec  5 01:12:54 compute-0 intelligent_haslett[193820]: }
Dec  5 01:12:54 compute-0 systemd[1]: libpod-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope: Deactivated successfully.
Dec  5 01:12:54 compute-0 podman[193803]: 2025-12-05 01:12:54.057517174 +0000 UTC m=+0.907767212 container died abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ee5c3083322b44d89762e3bd5a284f435588d2dfbda7947b123619c97253abf-merged.mount: Deactivated successfully.
Dec  5 01:12:54 compute-0 podman[193803]: 2025-12-05 01:12:54.11856788 +0000 UTC m=+0.968817888 container remove abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f (image=quay.io/ceph/ceph:v18, name=intelligent_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:12:54 compute-0 systemd[1]: libpod-conmon-abb0091ef959dad11561efdcbd219abcdbd1bd69f3ba364a40a2c3c7e0cef17f.scope: Deactivated successfully.
Dec  5 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.20649004 +0000 UTC m=+0.058032511 container create 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:12:54 compute-0 systemd[1]: Started libpod-conmon-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope.
Dec  5 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.181203251 +0000 UTC m=+0.032745802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.336182188 +0000 UTC m=+0.187724689 container init 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.3506636 +0000 UTC m=+0.202206051 container start 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:12:54 compute-0 podman[193876]: 2025-12-05 01:12:54.355341833 +0000 UTC m=+0.206884344 container attach 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:12:54 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:12:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  5 01:12:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/888022185' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:12:55 compute-0 systemd[1]: libpod-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope: Deactivated successfully.
Dec  5 01:12:55 compute-0 podman[193876]: 2025-12-05 01:12:55.023860381 +0000 UTC m=+0.875402892 container died 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/888022185' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b533aba22280a5c4d65c01ccf0b681bb8d1576a645cfa168e659bc8f8b05188e-merged.mount: Deactivated successfully.
Dec  5 01:12:55 compute-0 podman[193876]: 2025-12-05 01:12:55.110833844 +0000 UTC m=+0.962376315 container remove 0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f (image=quay.io/ceph/ceph:v18, name=eager_dubinsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:55 compute-0 systemd[1]: libpod-conmon-0890b2b39217df04b559c51b9097426a7958fe79c71c3fea545921a288089a3f.scope: Deactivated successfully.
Dec  5 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.2320187 +0000 UTC m=+0.080012196 container create b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.200115843 +0000 UTC m=+0.048109339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:55 compute-0 systemd[1]: Started libpod-conmon-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope.
Dec  5 01:12:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.402462977 +0000 UTC m=+0.250456473 container init b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.417155755 +0000 UTC m=+0.265149251 container start b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:12:55 compute-0 podman[193929]: 2025-12-05 01:12:55.423948318 +0000 UTC m=+0.271941814 container attach b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:12:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Dec  5 01:12:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  5 01:12:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  1: '-n'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  2: 'mgr.compute-0.afshmv'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  3: '-f'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  4: '--setuser'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  5: 'ceph'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  6: '--setgroup'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  7: 'ceph'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  8: '--default-log-to-file=false'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  9: '--default-log-to-journald=true'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr respawn  exe_path /proc/self/exe
Dec  5 01:12:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.afshmv(active, since 5s)
Dec  5 01:12:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  5 01:12:56 compute-0 systemd[1]: libpod-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope: Deactivated successfully.
Dec  5 01:12:56 compute-0 podman[193929]: 2025-12-05 01:12:56.150474826 +0000 UTC m=+0.998468332 container died b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd78485c9f516a10148e1f18f4b77d92e945da51fe72970641e62d1712c31af7-merged.mount: Deactivated successfully.
Dec  5 01:12:56 compute-0 podman[193929]: 2025-12-05 01:12:56.231384667 +0000 UTC m=+1.079378163 container remove b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158 (image=quay.io/ceph/ceph:v18, name=eloquent_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: ignoring --setuser ceph since I am not root
Dec  5 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: ignoring --setgroup ceph since I am not root
Dec  5 01:12:56 compute-0 systemd[1]: libpod-conmon-b8498f3feded28e37d17f60bf59e0b00674ac23d2c5045a1c95a48c17b5da158.scope: Deactivated successfully.
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: pidfile_write: ignore empty --pid-file
Dec  5 01:12:56 compute-0 podman[193981]: 2025-12-05 01:12:56.339223523 +0000 UTC m=+0.103256407 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter)
Dec  5 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.358260224 +0000 UTC m=+0.076520757 container create 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'alerts'
Dec  5 01:12:56 compute-0 systemd[1]: Started libpod-conmon-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope.
Dec  5 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.332249095 +0000 UTC m=+0.050509668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.488135887 +0000 UTC m=+0.206396410 container init 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.501843007 +0000 UTC m=+0.220103570 container start 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:12:56 compute-0 podman[193999]: 2025-12-05 01:12:56.507672343 +0000 UTC m=+0.225932896 container attach 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'balancer'
Dec  5 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:56.676+0000 7f1b6c4d6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:12:56 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:56.928+0000 7f1b6c4d6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:12:56 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'cephadm'
Dec  5 01:12:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3035572452' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  5 01:12:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  5 01:12:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586185734' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]: {
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]:    "epoch": 5,
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]:    "available": true,
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]:    "active_name": "compute-0.afshmv",
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]:    "num_standby": 0
Dec  5 01:12:57 compute-0 vigilant_johnson[194044]: }
Dec  5 01:12:57 compute-0 systemd[1]: libpod-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope: Deactivated successfully.
Dec  5 01:12:57 compute-0 podman[193999]: 2025-12-05 01:12:57.1916368 +0000 UTC m=+0.909897363 container died 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:12:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5838bc62e781fa19a2baeedfe342d7c2ee7115f5ba110af470bf3cf7cc14ac27-merged.mount: Deactivated successfully.
Dec  5 01:12:57 compute-0 podman[193999]: 2025-12-05 01:12:57.284317355 +0000 UTC m=+1.002577908 container remove 02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07 (image=quay.io/ceph/ceph:v18, name=vigilant_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:12:57 compute-0 systemd[1]: libpod-conmon-02f0861951b2bb2e9400108227d03e4f338aca6635427f4539bc27377e789b07.scope: Deactivated successfully.
Dec  5 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.421223908 +0000 UTC m=+0.088693683 container create bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.382567029 +0000 UTC m=+0.050036834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:12:57 compute-0 systemd[1]: Started libpod-conmon-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope.
Dec  5 01:12:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.586679563 +0000 UTC m=+0.254149318 container init bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.600616259 +0000 UTC m=+0.268085994 container start bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:12:57 compute-0 podman[194080]: 2025-12-05 01:12:57.605372774 +0000 UTC m=+0.272842539 container attach bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:12:58 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'crash'
Dec  5 01:12:59 compute-0 ceph-mgr[193209]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  5 01:12:59 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:12:59.227+0000 7f1b6c4d6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  5 01:12:59 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'dashboard'
Dec  5 01:12:59 compute-0 podman[158197]: time="2025-12-05T01:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23487 "" "Go-http-client/1.1"
Dec  5 01:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Dec  5 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'devicehealth'
Dec  5 01:13:00 compute-0 podman[194132]: 2025-12-05 01:13:00.736785723 +0000 UTC m=+0.141168564 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  5 01:13:00 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:00.975+0000 7f1b6c4d6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  5 01:13:00 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'diskprediction_local'
Dec  5 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:13:01 compute-0 openstack_network_exporter[160350]: ERROR   01:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  5 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  5 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]:  from numpy import show_config as show_numpy_config
Dec  5 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:01.491+0000 7f1b6c4d6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  5 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  5 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'influx'
Dec  5 01:13:01 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:01.735+0000 7f1b6c4d6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  5 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  5 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'insights'
Dec  5 01:13:01 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'iostat'
Dec  5 01:13:02 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:02.243+0000 7f1b6c4d6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  5 01:13:02 compute-0 ceph-mgr[193209]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  5 01:13:02 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'k8sevents'
Dec  5 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'localpool'
Dec  5 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mds_autoscaler'
Dec  5 01:13:04 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'mirroring'
Dec  5 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'nfs'
Dec  5 01:13:05 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:05.916+0000 7f1b6c4d6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  5 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  5 01:13:05 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'orchestrator'
Dec  5 01:13:06 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:06.563+0000 7f1b6c4d6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  5 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  5 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_perf_query'
Dec  5 01:13:06 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:06.826+0000 7f1b6c4d6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  5 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  5 01:13:06 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'osd_support'
Dec  5 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.058+0000 7f1b6c4d6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'pg_autoscaler'
Dec  5 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.331+0000 7f1b6c4d6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'progress'
Dec  5 01:13:07 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:07.564+0000 7f1b6c4d6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  5 01:13:07 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'prometheus'
Dec  5 01:13:08 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:08.570+0000 7f1b6c4d6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  5 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  5 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rbd_support'
Dec  5 01:13:08 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:08.873+0000 7f1b6c4d6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  5 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  5 01:13:08 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'restful'
Dec  5 01:13:09 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rgw'
Dec  5 01:13:10 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:10.312+0000 7f1b6c4d6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  5 01:13:10 compute-0 ceph-mgr[193209]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  5 01:13:10 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'rook'
Dec  5 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.406+0000 7f1b6c4d6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'selftest'
Dec  5 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.646+0000 7f1b6c4d6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'snap_schedule'
Dec  5 01:13:12 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:12.920+0000 7f1b6c4d6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  5 01:13:12 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'stats'
Dec  5 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'status'
Dec  5 01:13:13 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:13.413+0000 7f1b6c4d6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  5 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  5 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telegraf'
Dec  5 01:13:13 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:13.653+0000 7f1b6c4d6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  5 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  5 01:13:13 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'telemetry'
Dec  5 01:13:14 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:14.277+0000 7f1b6c4d6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  5 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  5 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'test_orchestrator'
Dec  5 01:13:14 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:14.972+0000 7f1b6c4d6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  5 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  5 01:13:14 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'volumes'
Dec  5 01:13:15 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:15.735+0000 7f1b6c4d6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  5 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  5 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Loading python module 'zabbix'
Dec  5 01:13:15 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T01:13:15.979+0000 7f1b6c4d6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  5 01:13:15 compute-0 ceph-mgr[193209]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  5 01:13:15 compute-0 ceph-mgr[193209]: ms_deliver_dispatch: unhandled message 0x55c881e5d1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Dec  5 01:13:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Active manager daemon compute-0.afshmv restarted
Dec  5 01:13:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  5 01:13:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:13:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.afshmv
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr handle_mgr_map Activating!
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr handle_mgr_map I am now activating
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.afshmv(active, starting, since 0.0244957s)
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr metadata", "who": "compute-0.afshmv", "id": "compute-0.afshmv"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e1 all = 1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mon[192914]: Active manager daemon compute-0.afshmv restarted
Dec  5 01:13:16 compute-0 ceph-mon[192914]: Activating manager daemon compute-0.afshmv
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Manager daemon compute-0.afshmv is now available
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: balancer
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:13:16
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: cephadm
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: crash
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: devicehealth
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: iostat
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: nfs
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: orchestrator
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: pg_autoscaler
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: progress
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loading...
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] No stored events to load
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded [] historic events
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [progress INFO root] Loaded OSDMap, ready.
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] recovery thread starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] starting setup
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: rbd_support
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: restful
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: status
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [restful INFO root] server_addr: :: server_port: 8003
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [restful WARNING root] server not running: no certificate configured
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: telemetry
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] PerfHandler: starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TaskHandler: starting
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"} v 0) v1
Dec  5 01:13:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] setup complete
Dec  5 01:13:16 compute-0 ceph-mgr[193209]: mgr load Constructed class from module: volumes
Dec  5 01:13:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019927801 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:17 compute-0 ceph-mon[192914]: Manager daemon compute-0.afshmv is now available
Dec  5 01:13:17 compute-0 ceph-mon[192914]: Found migration_current of "None". Setting to last migration.
Dec  5 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/mirror_snapshot_schedule"}]: dispatch
Dec  5 01:13:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.afshmv/trash_purge_schedule"}]: dispatch
Dec  5 01:13:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.afshmv(active, since 1.09915s)
Dec  5 01:13:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  5 01:13:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  5 01:13:17 compute-0 vibrant_matsumoto[194097]: {
Dec  5 01:13:17 compute-0 vibrant_matsumoto[194097]:    "mgrmap_epoch": 7,
Dec  5 01:13:17 compute-0 vibrant_matsumoto[194097]:    "initialized": true
Dec  5 01:13:17 compute-0 vibrant_matsumoto[194097]: }
Dec  5 01:13:17 compute-0 systemd[1]: libpod-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope: Deactivated successfully.
Dec  5 01:13:17 compute-0 podman[194268]: 2025-12-05 01:13:17.190039167 +0000 UTC m=+0.043419846 container died bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7c6f9a43b8975fcdf4863bda44137cf1009e1dd35d81bf183e1d6a44f7b226-merged.mount: Deactivated successfully.
Dec  5 01:13:17 compute-0 podman[194268]: 2025-12-05 01:13:17.262291861 +0000 UTC m=+0.115672520 container remove bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942 (image=quay.io/ceph/ceph:v18, name=vibrant_matsumoto, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:13:17 compute-0 systemd[1]: libpod-conmon-bdd72895c284df98646becd65cd92d92079ea5119489616b3c4f455c2a678942.scope: Deactivated successfully.
Dec  5 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.379173774 +0000 UTC m=+0.066688427 container create f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:17 compute-0 systemd[1]: Started libpod-conmon-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope.
Dec  5 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.360200845 +0000 UTC m=+0.047715488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.488007699 +0000 UTC m=+0.175522402 container init f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.507591076 +0000 UTC m=+0.195105719 container start f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:17 compute-0 podman[194283]: 2025-12-05 01:13:17.512987209 +0000 UTC m=+0.200501922 container attach f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:13:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Dec  5 01:13:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Dec  5 01:13:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:13:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:13:18 compute-0 systemd[1]: libpod-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope: Deactivated successfully.
Dec  5 01:13:18 compute-0 podman[194283]: 2025-12-05 01:13:18.073216699 +0000 UTC m=+0.760731322 container died f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3076ddb13ec2579139929d1d2d50d6dfc7a57b69c114aa08daf8aeb9289ef9e-merged.mount: Deactivated successfully.
Dec  5 01:13:18 compute-0 podman[194283]: 2025-12-05 01:13:18.12986839 +0000 UTC m=+0.817383013 container remove f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed (image=quay.io/ceph/ceph:v18, name=peaceful_shaw, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:13:18 compute-0 systemd[1]: libpod-conmon-f66ad9fcafb72a9055bfe59da04343908b1d27c4c7882e788971a71e76ffc6ed.scope: Deactivated successfully.
Dec  5 01:13:18 compute-0 podman[194325]: 2025-12-05 01:13:18.196382921 +0000 UTC m=+0.086120900 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:13:18 compute-0 podman[194357]: 2025-12-05 01:13:18.183257118 +0000 UTC m=+0.029551741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: [cephadm INFO cherrypy.error] [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  5 01:13:18 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  5 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:13:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.afshmv(active, since 3s)
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.116142253 +0000 UTC m=+0.962436866 container create 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:19 compute-0 podman[194332]: 2025-12-05 01:13:19.13362995 +0000 UTC m=+1.020292771 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:13:19 compute-0 systemd[1]: Started libpod-conmon-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope.
Dec  5 01:13:19 compute-0 podman[194333]: 2025-12-05 01:13:19.178439374 +0000 UTC m=+1.058013584 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  5 01:13:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.226256824 +0000 UTC m=+1.072551427 container init 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.238016969 +0000 UTC m=+1.084311572 container start 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.242498886 +0000 UTC m=+1.088793509 container attach 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Dec  5 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_user
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  5 01:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Dec  5 01:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_config
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  5 01:13:19 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  5 01:13:19 compute-0 kind_hawking[194434]: ssh user set to ceph-admin. sudo will be used
Dec  5 01:13:19 compute-0 systemd[1]: libpod-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope: Deactivated successfully.
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.79304553 +0000 UTC m=+1.639340143 container died 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f4958dc5dc908d04e32edf57a87df877abc7224d5b7397767c62c28fc32b67-merged.mount: Deactivated successfully.
Dec  5 01:13:19 compute-0 podman[194357]: 2025-12-05 01:13:19.842240189 +0000 UTC m=+1.688534792 container remove 696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37 (image=quay.io/ceph/ceph:v18, name=kind_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:13:19 compute-0 systemd[1]: libpod-conmon-696f9a56df3196885ddff3a6d0be539f6721e1611fd4c687f7f0b6327f9bbd37.scope: Deactivated successfully.
Dec  5 01:13:19 compute-0 podman[194474]: 2025-12-05 01:13:19.938216618 +0000 UTC m=+0.071992168 container create 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:13:19 compute-0 systemd[1]: Started libpod-conmon-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope.
Dec  5 01:13:19 compute-0 podman[194474]: 2025-12-05 01:13:19.907868955 +0000 UTC m=+0.041644585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.106402701 +0000 UTC m=+0.240178291 container init 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.118270918 +0000 UTC m=+0.252046488 container start 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.129757925 +0000 UTC m=+0.263533485 container attach 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Bus STARTING
Dec  5 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Serving on http://192.168.122.100:8765
Dec  5 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Serving on https://192.168.122.100:7150
Dec  5 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Bus STARTED
Dec  5 01:13:20 compute-0 ceph-mon[192914]: [05/Dec/2025:01:13:18] ENGINE Client ('192.168.122.100', 52268) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  5 01:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Dec  5 01:13:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh private key
Dec  5 01:13:20 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  5 01:13:20 compute-0 systemd[1]: libpod-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope: Deactivated successfully.
Dec  5 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.732416781 +0000 UTC m=+0.866192341 container died 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30a3f266bc92c8ab9a611095b7c9dc4fe062f60d254e433dd1c93e467e8a6de-merged.mount: Deactivated successfully.
Dec  5 01:13:20 compute-0 podman[194474]: 2025-12-05 01:13:20.826237419 +0000 UTC m=+0.960012989 container remove 4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea (image=quay.io/ceph/ceph:v18, name=gifted_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:13:20 compute-0 systemd[1]: libpod-conmon-4c8b6331359d314cb487e8e99378cbee4d4c22ee6d9d5988a5320d4aaafaf2ea.scope: Deactivated successfully.
Dec  5 01:13:20 compute-0 podman[194518]: 2025-12-05 01:13:20.875391666 +0000 UTC m=+0.114340272 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:13:20 compute-0 podman[194547]: 2025-12-05 01:13:20.913107329 +0000 UTC m=+0.059754330 container create 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:13:20 compute-0 systemd[1]: Started libpod-conmon-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope.
Dec  5 01:13:20 compute-0 podman[194547]: 2025-12-05 01:13:20.890437334 +0000 UTC m=+0.037084365 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.030478876 +0000 UTC m=+0.177125867 container init 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.044294039 +0000 UTC m=+0.190941020 container start 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.04855431 +0000 UTC m=+0.195201291 container attach 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:21 compute-0 ceph-mon[192914]: Set ssh ssh_user
Dec  5 01:13:21 compute-0 ceph-mon[192914]: Set ssh ssh_config
Dec  5 01:13:21 compute-0 ceph-mon[192914]: ssh user set to ceph-admin. sudo will be used
Dec  5 01:13:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:21 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Dec  5 01:13:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:21 compute-0 ceph-mgr[193209]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  5 01:13:21 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  5 01:13:21 compute-0 systemd[1]: libpod-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope: Deactivated successfully.
Dec  5 01:13:21 compute-0 conmon[194563]: conmon 36f20d9f1d82acfc21ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope/container/memory.events
Dec  5 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.625607518 +0000 UTC m=+0.772254539 container died 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cff8ebfafe842c70c4ffaf7eaac099f2fb1d10c2bcb8fc608d7af065cfff772-merged.mount: Deactivated successfully.
Dec  5 01:13:21 compute-0 podman[194547]: 2025-12-05 01:13:21.691606525 +0000 UTC m=+0.838253506 container remove 36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb (image=quay.io/ceph/ceph:v18, name=hungry_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:13:21 compute-0 systemd[1]: libpod-conmon-36f20d9f1d82acfc21ff8b197bda2abafa2669ab534daa1e5469e9f0f5d81bcb.scope: Deactivated successfully.
Dec  5 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.785162995 +0000 UTC m=+0.061065457 container create 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:13:21 compute-0 systemd[1]: Started libpod-conmon-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope.
Dec  5 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.759143155 +0000 UTC m=+0.035045597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:21 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.926751641 +0000 UTC m=+0.202654123 container init 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.937724313 +0000 UTC m=+0.213626775 container start 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:13:21 compute-0 podman[194599]: 2025-12-05 01:13:21.945145464 +0000 UTC m=+0.221047936 container attach 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:13:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053118 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:22 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:22 compute-0 ceph-mon[192914]: Set ssh ssh_identity_key
Dec  5 01:13:22 compute-0 ceph-mon[192914]: Set ssh private key
Dec  5 01:13:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:22 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:22 compute-0 suspicious_colden[194615]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKqIArDaqP5VTY5pu+bmfkIDqlV1OYvK960Ala0m73bO7YePCy/ROElV++adpRzJ7pOq+eUbor3fKjFH5kWsqF8L0P2BfLW2r2c8+P7vZ9R4Ivrng8Nx6WakK4vWBs1QpVWZzhqHXtJ/pyKGffLpmqmHHQxZUa7q4afZEzDYWP4O1W7Qx6/WSnch9KPY5/tt3+Km2zGb22LAbaE7CGLyflQp1XSgpE+fQxa1BUhiHYOxaan2s/bRP5MtPlhpLfdOczqKJmYOUo7TqTOBb0NASnZQMqY3zIVZk1cx4/wBx4uggKUSPwLZoEpBbumKeSI9aPwk/lecqgDuB5udA14AxnJr3el3Vap09/C/mdPxfnie+g3aOK37H0zlFLZ9buWQ3LfHQBgztWMirSVnxvPDUuzbRi5lsPPnJZ4UcPq9d1GdVqDxUdiQ8RqxNMTtSeywa36Men4QQldL915BwCc9bcppQ3sxEvNnH2EpWWBs8FvWrXP54/oUIWBua9+YBk/mU= zuul@controller
Dec  5 01:13:22 compute-0 systemd[1]: libpod-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope: Deactivated successfully.
Dec  5 01:13:22 compute-0 podman[194641]: 2025-12-05 01:13:22.566498901 +0000 UTC m=+0.037463256 container died 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8c348735568d5e94e713c4ad056045ddbfd089468b84001738273e629ff2f7-merged.mount: Deactivated successfully.
Dec  5 01:13:22 compute-0 podman[194641]: 2025-12-05 01:13:22.636698447 +0000 UTC m=+0.107662762 container remove 9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117 (image=quay.io/ceph/ceph:v18, name=suspicious_colden, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:22 compute-0 systemd[1]: libpod-conmon-9afcfd2e04dac41780428d45196168eb3a873630c51bef5c3a263f54ac84f117.scope: Deactivated successfully.
Dec  5 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.764381587 +0000 UTC m=+0.080908961 container create e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.723409292 +0000 UTC m=+0.039936726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:22 compute-0 systemd[1]: Started libpod-conmon-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope.
Dec  5 01:13:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.945668962 +0000 UTC m=+0.262196376 container init e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.954650038 +0000 UTC m=+0.271177392 container start e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:22 compute-0 podman[194656]: 2025-12-05 01:13:22.959787504 +0000 UTC m=+0.276314858 container attach e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:23 compute-0 ceph-mon[192914]: Set ssh ssh_identity_pub
Dec  5 01:13:23 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:23 compute-0 podman[194699]: 2025-12-05 01:13:23.703320286 +0000 UTC m=+0.112092229 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:13:23 compute-0 systemd-logind[792]: New session 28 of user ceph-admin.
Dec  5 01:13:23 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec  5 01:13:23 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  5 01:13:23 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  5 01:13:23 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec  5 01:13:23 compute-0 systemd-logind[792]: New session 30 of user ceph-admin.
Dec  5 01:13:23 compute-0 systemd[194721]: Queued start job for default target Main User Target.
Dec  5 01:13:23 compute-0 systemd[194721]: Created slice User Application Slice.
Dec  5 01:13:23 compute-0 systemd[194721]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  5 01:13:23 compute-0 systemd[194721]: Started Daily Cleanup of User's Temporary Directories.
Dec  5 01:13:23 compute-0 systemd[194721]: Reached target Paths.
Dec  5 01:13:23 compute-0 systemd[194721]: Reached target Timers.
Dec  5 01:13:23 compute-0 systemd[194721]: Starting D-Bus User Message Bus Socket...
Dec  5 01:13:23 compute-0 systemd[194721]: Starting Create User's Volatile Files and Directories...
Dec  5 01:13:23 compute-0 systemd[194721]: Listening on D-Bus User Message Bus Socket.
Dec  5 01:13:23 compute-0 systemd[194721]: Reached target Sockets.
Dec  5 01:13:23 compute-0 systemd[194721]: Finished Create User's Volatile Files and Directories.
Dec  5 01:13:23 compute-0 systemd[194721]: Reached target Basic System.
Dec  5 01:13:23 compute-0 systemd[194721]: Reached target Main User Target.
Dec  5 01:13:23 compute-0 systemd[194721]: Startup finished in 168ms.
Dec  5 01:13:23 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec  5 01:13:24 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec  5 01:13:24 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec  5 01:13:24 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:24 compute-0 systemd-logind[792]: New session 31 of user ceph-admin.
Dec  5 01:13:24 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec  5 01:13:25 compute-0 systemd-logind[792]: New session 32 of user ceph-admin.
Dec  5 01:13:25 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec  5 01:13:25 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  5 01:13:25 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  5 01:13:25 compute-0 systemd-logind[792]: New session 33 of user ceph-admin.
Dec  5 01:13:25 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec  5 01:13:26 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:26 compute-0 systemd-logind[792]: New session 34 of user ceph-admin.
Dec  5 01:13:26 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Dec  5 01:13:26 compute-0 ceph-mon[192914]: Deploying cephadm binary to compute-0
Dec  5 01:13:26 compute-0 podman[194981]: 2025-12-05 01:13:26.527130028 +0000 UTC m=+0.127614040 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Dec  5 01:13:26 compute-0 systemd-logind[792]: New session 35 of user ceph-admin.
Dec  5 01:13:26 compute-0 systemd[1]: Started Session 35 of User ceph-admin.
Dec  5 01:13:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:27 compute-0 systemd-logind[792]: New session 36 of user ceph-admin.
Dec  5 01:13:27 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Dec  5 01:13:27 compute-0 systemd-logind[792]: New session 37 of user ceph-admin.
Dec  5 01:13:27 compute-0 systemd[1]: Started Session 37 of User ceph-admin.
Dec  5 01:13:28 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:28 compute-0 systemd-logind[792]: New session 38 of user ceph-admin.
Dec  5 01:13:28 compute-0 systemd[1]: Started Session 38 of User ceph-admin.
Dec  5 01:13:29 compute-0 systemd-logind[792]: New session 39 of user ceph-admin.
Dec  5 01:13:29 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Dec  5 01:13:29 compute-0 podman[158197]: time="2025-12-05T01:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 23510 "" "Go-http-client/1.1"
Dec  5 01:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Dec  5 01:13:29 compute-0 systemd-logind[792]: New session 40 of user ceph-admin.
Dec  5 01:13:29 compute-0 systemd[1]: Started Session 40 of User ceph-admin.
Dec  5 01:13:30 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:30 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added host compute-0
Dec  5 01:13:30 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  5 01:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:13:30 compute-0 nostalgic_burnell[194673]: Added host 'compute-0' with addr '192.168.122.100'
Dec  5 01:13:30 compute-0 systemd[1]: libpod-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope: Deactivated successfully.
Dec  5 01:13:30 compute-0 podman[195353]: 2025-12-05 01:13:30.636601347 +0000 UTC m=+0.050595579 container died e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cf09b05c803de973d208ab0093d58a388ea8358932541823e71ea3b375b0c5a-merged.mount: Deactivated successfully.
Dec  5 01:13:30 compute-0 podman[195353]: 2025-12-05 01:13:30.699121985 +0000 UTC m=+0.113116147 container remove e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed (image=quay.io/ceph/ceph:v18, name=nostalgic_burnell, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:30 compute-0 systemd[1]: libpod-conmon-e16a4f4bfb5531f9dbcf77e1ed445813019c63b2756c9ed04e414d85a73694ed.scope: Deactivated successfully.
Dec  5 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.835718799 +0000 UTC m=+0.089506066 container create 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.801046673 +0000 UTC m=+0.054833960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:30 compute-0 podman[195416]: 2025-12-05 01:13:30.903023853 +0000 UTC m=+0.101546258 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:13:30 compute-0 systemd[1]: Started libpod-conmon-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope.
Dec  5 01:13:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:30 compute-0 podman[195403]: 2025-12-05 01:13:30.990220012 +0000 UTC m=+0.244007339 container init 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.006160676 +0000 UTC m=+0.259947953 container start 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.013296689 +0000 UTC m=+0.267083976 container attach 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.383976729 +0000 UTC m=+0.049158919 container create 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: ERROR   01:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:13:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:13:31 compute-0 systemd[1]: Started libpod-conmon-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope.
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.365864234 +0000 UTC m=+0.031046444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.480483313 +0000 UTC m=+0.145665523 container init 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.489966352 +0000 UTC m=+0.155148542 container start 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.494054709 +0000 UTC m=+0.159236899 container attach 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:31 compute-0 ceph-mon[192914]: Added host compute-0
Dec  5 01:13:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:31 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  5 01:13:31 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  5 01:13:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  5 01:13:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:31 compute-0 lucid_cannon[195473]: Scheduled mon update...
Dec  5 01:13:31 compute-0 systemd[1]: libpod-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope: Deactivated successfully.
Dec  5 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.59220847 +0000 UTC m=+0.845995797 container died 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc324d499a3b309e54afefe82a97dc3c4651bbba6ffaf74c76f4396cd61a4a90-merged.mount: Deactivated successfully.
Dec  5 01:13:31 compute-0 podman[195403]: 2025-12-05 01:13:31.669998781 +0000 UTC m=+0.923786048 container remove 1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf (image=quay.io/ceph/ceph:v18, name=lucid_cannon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 01:13:31 compute-0 systemd[1]: libpod-conmon-1eb9d3729f7ced5625ed72e59b2db2b1e19cf6f28a2132e392695d605ba133cf.scope: Deactivated successfully.
Dec  5 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.771523998 +0000 UTC m=+0.069879318 container create add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:13:31 compute-0 systemd[1]: Started libpod-conmon-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope.
Dec  5 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.737568273 +0000 UTC m=+0.035923693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:31 compute-0 festive_hermann[195562]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Dec  5 01:13:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:31 compute-0 systemd[1]: libpod-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope: Deactivated successfully.
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.861502437 +0000 UTC m=+0.526684627 container died 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.89785504 +0000 UTC m=+0.196210400 container init add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-db17bd72dad808e3c366a3ffaa512d83bd7ae4b7a533ffd630dd9463a49259f7-merged.mount: Deactivated successfully.
Dec  5 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.920977238 +0000 UTC m=+0.219332588 container start add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:31 compute-0 podman[195582]: 2025-12-05 01:13:31.928268575 +0000 UTC m=+0.226623935 container attach add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:31 compute-0 podman[195546]: 2025-12-05 01:13:31.954069439 +0000 UTC m=+0.619251629 container remove 5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8 (image=quay.io/ceph/ceph:v18, name=festive_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:31 compute-0 systemd[1]: libpod-conmon-5cbc8af0b1493a992ec9e03403f223d130a6b1252c46b0e637767b4a172cfaf8.scope: Deactivated successfully.
Dec  5 01:13:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Dec  5 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:32 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  5 01:13:32 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  5 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 charming_hugle[195598]: Scheduled mgr update...
Dec  5 01:13:32 compute-0 systemd[1]: libpod-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope: Deactivated successfully.
Dec  5 01:13:32 compute-0 podman[195582]: 2025-12-05 01:13:32.552243878 +0000 UTC m=+0.850599238 container died add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:32 compute-0 ceph-mon[192914]: Saving service mon spec with placement count:5
Dec  5 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-67181fe6528af9161487c849b7f539cacc99ddd188d85acab9b69fad57bb431c-merged.mount: Deactivated successfully.
Dec  5 01:13:32 compute-0 podman[195582]: 2025-12-05 01:13:32.645594002 +0000 UTC m=+0.943949342 container remove add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4 (image=quay.io/ceph/ceph:v18, name=charming_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:13:32 compute-0 systemd[1]: libpod-conmon-add6ae3922cf523f76116894e1f37b6048e4e082e9ed156fb0dd628c33fceae4.scope: Deactivated successfully.
Dec  5 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.740773888 +0000 UTC m=+0.064377341 container create aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:32 compute-0 systemd[1]: Started libpod-conmon-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope.
Dec  5 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.71200032 +0000 UTC m=+0.035603863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.864034323 +0000 UTC m=+0.187637786 container init aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.875049726 +0000 UTC m=+0.198653169 container start aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:13:32 compute-0 podman[195757]: 2025-12-05 01:13:32.880259213 +0000 UTC m=+0.203862656 container attach aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:13:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:33 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service crash spec with placement *
Dec  5 01:13:33 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  5 01:13:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  5 01:13:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:33 compute-0 zealous_thompson[195786]: Scheduled crash update...
Dec  5 01:13:33 compute-0 systemd[1]: libpod-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope: Deactivated successfully.
Dec  5 01:13:33 compute-0 podman[195757]: 2025-12-05 01:13:33.500410727 +0000 UTC m=+0.824014200 container died aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-05729670b27caae475e3b001bb618d28da705e522f26cbd3b07b79bf28018d7a-merged.mount: Deactivated successfully.
Dec  5 01:13:33 compute-0 podman[195757]: 2025-12-05 01:13:33.587529994 +0000 UTC m=+0.911133437 container remove aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5 (image=quay.io/ceph/ceph:v18, name=zealous_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:33 compute-0 systemd[1]: libpod-conmon-aed4696ad00b88ac61451f30e1949f6104ce01f4a61bafd8d647d0c5208462e5.scope: Deactivated successfully.
Dec  5 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.710660605 +0000 UTC m=+0.078023939 container create a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.674740444 +0000 UTC m=+0.042103818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:33 compute-0 systemd[1]: Started libpod-conmon-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope.
Dec  5 01:13:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.828125845 +0000 UTC m=+0.195489259 container init a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:13:33 compute-0 ceph-mon[192914]: Saving service mgr spec with placement count:2
Dec  5 01:13:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.850817871 +0000 UTC m=+0.218181215 container start a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:13:33 compute-0 podman[195948]: 2025-12-05 01:13:33.856858932 +0000 UTC m=+0.224222276 container attach a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:13:34 compute-0 ceph-mgr[193209]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  5 01:13:34 compute-0 podman[196011]: 2025-12-05 01:13:34.084464964 +0000 UTC m=+0.104031989 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:34 compute-0 podman[196011]: 2025-12-05 01:13:34.414729015 +0000 UTC m=+0.434296040 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Dec  5 01:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1794856051' entity='client.admin' 
Dec  5 01:13:34 compute-0 podman[195948]: 2025-12-05 01:13:34.470030288 +0000 UTC m=+0.837393642 container died a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:13:34 compute-0 systemd[1]: libpod-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope: Deactivated successfully.
Dec  5 01:13:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-17c090e40e9a3d2f05adb3a911484c61dd2ef20dd65bc55af84608eaf12cbbde-merged.mount: Deactivated successfully.
Dec  5 01:13:34 compute-0 podman[195948]: 2025-12-05 01:13:34.535054236 +0000 UTC m=+0.902417580 container remove a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8 (image=quay.io/ceph/ceph:v18, name=quizzical_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:13:34 compute-0 systemd[1]: libpod-conmon-a1c15d17a9c67c4e38eda550ac51af5ecb46c4f2e9273b3c63088396fcecebb8.scope: Deactivated successfully.
Dec  5 01:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.650414277 +0000 UTC m=+0.069836367 container create c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:13:34 compute-0 systemd[1]: Started libpod-conmon-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope.
Dec  5 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.625760206 +0000 UTC m=+0.045182306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.779795976 +0000 UTC m=+0.199218076 container init c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.797863449 +0000 UTC m=+0.217285519 container start c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:13:34 compute-0 podman[196087]: 2025-12-05 01:13:34.803660324 +0000 UTC m=+0.223082424 container attach c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:34 compute-0 ceph-mon[192914]: Saving service crash spec with placement *
Dec  5 01:13:34 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1794856051' entity='client.admin' 
Dec  5 01:13:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:35 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 196241 (sysctl)
Dec  5 01:13:35 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  5 01:13:35 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  5 01:13:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Dec  5 01:13:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:35 compute-0 podman[196087]: 2025-12-05 01:13:35.474808868 +0000 UTC m=+0.894230978 container died c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:35 compute-0 systemd[1]: libpod-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope: Deactivated successfully.
Dec  5 01:13:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a1a4748f6d194a5044fda5565ae7e7d7716318444826ca3d4d8f44b0626a051-merged.mount: Deactivated successfully.
Dec  5 01:13:35 compute-0 podman[196087]: 2025-12-05 01:13:35.554053001 +0000 UTC m=+0.973475071 container remove c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5 (image=quay.io/ceph/ceph:v18, name=nostalgic_carson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:35 compute-0 systemd[1]: libpod-conmon-c617ea0bf0dd329973f5e5076a565b41c9b3f3704e960c8cd52e5532c54818f5.scope: Deactivated successfully.
Dec  5 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.642799105 +0000 UTC m=+0.064681781 container create bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.607083769 +0000 UTC m=+0.028966465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:35 compute-0 systemd[1]: Started libpod-conmon-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope.
Dec  5 01:13:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.821388883 +0000 UTC m=+0.243271569 container init bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.852276311 +0000 UTC m=+0.274159017 container start bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  5 01:13:35 compute-0 podman[196262]: 2025-12-05 01:13:35.863104039 +0000 UTC m=+0.284986705 container attach bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:13:36 compute-0 ceph-mgr[193209]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  5 01:13:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  5 01:13:36 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:36 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added label _admin to host compute-0
Dec  5 01:13:36 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  5 01:13:36 compute-0 kind_fermat[196290]: Added label _admin to host compute-0
Dec  5 01:13:36 compute-0 systemd[1]: libpod-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope: Deactivated successfully.
Dec  5 01:13:36 compute-0 podman[196262]: 2025-12-05 01:13:36.429513764 +0000 UTC m=+0.851396470 container died bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:36 compute-0 ceph-mon[192914]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  5 01:13:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8297a75fb58b8409b5c31a26d3c718da6225825a3e36b10ebec983a29f9753d-merged.mount: Deactivated successfully.
Dec  5 01:13:36 compute-0 podman[196262]: 2025-12-05 01:13:36.50045048 +0000 UTC m=+0.922333146 container remove bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10 (image=quay.io/ceph/ceph:v18, name=kind_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:36 compute-0 systemd[1]: libpod-conmon-bc9b8a74e350a9abd99ff827e6d80b793eb12f8bf492159fc0c5186d81309e10.scope: Deactivated successfully.
Dec  5 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.583968505 +0000 UTC m=+0.057423684 container create 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:13:36 compute-0 systemd[1]: Started libpod-conmon-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope.
Dec  5 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.564223274 +0000 UTC m=+0.037678473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.696808174 +0000 UTC m=+0.170263393 container init 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.71216458 +0000 UTC m=+0.185619759 container start 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:13:36 compute-0 podman[196445]: 2025-12-05 01:13:36.717307436 +0000 UTC m=+0.190762655 container attach 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Dec  5 01:13:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2875519369' entity='client.admin' 
Dec  5 01:13:37 compute-0 systemd[1]: libpod-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope: Deactivated successfully.
Dec  5 01:13:37 compute-0 podman[196445]: 2025-12-05 01:13:37.29738033 +0000 UTC m=+0.770835509 container died 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bbab318e522938f36eeb6b2b02aa556ed07177e9b263a3cfc0090a2b7d2450e-merged.mount: Deactivated successfully.
Dec  5 01:13:37 compute-0 podman[196445]: 2025-12-05 01:13:37.346131377 +0000 UTC m=+0.819586556 container remove 6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9 (image=quay.io/ceph/ceph:v18, name=zealous_napier, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:37 compute-0 systemd[1]: libpod-conmon-6e42ca781d993e1bf04966bbc54681fde3391ff0fb663683fbe9a11d53b4d1d9.scope: Deactivated successfully.
Dec  5 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.438431531 +0000 UTC m=+0.061300664 container create b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.472425918 +0000 UTC m=+0.063756534 container create a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:37 compute-0 systemd[1]: Started libpod-conmon-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope.
Dec  5 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.408703886 +0000 UTC m=+0.031573099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:37 compute-0 systemd[1]: Started libpod-conmon-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope.
Dec  5 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.437244117 +0000 UTC m=+0.028574753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.568029916 +0000 UTC m=+0.190899079 container init b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:13:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:37 compute-0 ceph-mon[192914]: Added label _admin to host compute-0
Dec  5 01:13:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:37 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2875519369' entity='client.admin' 
Dec  5 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.584402022 +0000 UTC m=+0.207271155 container start b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.586778679 +0000 UTC m=+0.178109355 container init a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:37 compute-0 podman[196634]: 2025-12-05 01:13:37.592250115 +0000 UTC m=+0.215119278 container attach b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.596747173 +0000 UTC m=+0.188077789 container start a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.602322301 +0000 UTC m=+0.193652967 container attach a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:13:37 compute-0 optimistic_elion[196670]: 167 167
Dec  5 01:13:37 compute-0 systemd[1]: libpod-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope: Deactivated successfully.
Dec  5 01:13:37 compute-0 conmon[196670]: conmon a16c8f1e780eb05be659 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope/container/memory.events
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.608510407 +0000 UTC m=+0.199841073 container died a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:13:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-10cf81b12018d5e460b4e208f2da1b4aa5d296b8f98e986330d3ba8a8caee8d7-merged.mount: Deactivated successfully.
Dec  5 01:13:37 compute-0 podman[196644]: 2025-12-05 01:13:37.665995812 +0000 UTC m=+0.257326418 container remove a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:13:37 compute-0 systemd[1]: libpod-conmon-a16c8f1e780eb05be65948bd2314a087f6d70eba10cb8cb6d12eec8f5b5e2305.scope: Deactivated successfully.
Dec  5 01:13:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Dec  5 01:13:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2686863758' entity='client.admin' 
Dec  5 01:13:38 compute-0 angry_pike[196665]: set mgr/dashboard/cluster/status
Dec  5 01:13:38 compute-0 systemd[1]: libpod-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope: Deactivated successfully.
Dec  5 01:13:38 compute-0 podman[196634]: 2025-12-05 01:13:38.375834156 +0000 UTC m=+0.998703319 container died b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:13:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0570a5fe07fb665e6b0eb9b5f4e2a7c04ef0ecb540141a0f899e46ee7ad0245-merged.mount: Deactivated successfully.
Dec  5 01:13:38 compute-0 podman[196634]: 2025-12-05 01:13:38.46179089 +0000 UTC m=+1.084660023 container remove b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a (image=quay.io/ceph/ceph:v18, name=angry_pike, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:13:38 compute-0 systemd[1]: libpod-conmon-b9bd8cc4db5ecfdc7c392b4c479902c86259ef1793af271f6dfc78128289107a.scope: Deactivated successfully.
Dec  5 01:13:38 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2686863758' entity='client.admin' 
Dec  5 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.741648787 +0000 UTC m=+0.095958559 container create 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.703438241 +0000 UTC m=+0.057747983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:38 compute-0 systemd[1]: Started libpod-conmon-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope.
Dec  5 01:13:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.888567485 +0000 UTC m=+0.242877227 container init 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.910535629 +0000 UTC m=+0.264845361 container start 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:13:38 compute-0 podman[196728]: 2025-12-05 01:13:38.914910234 +0000 UTC m=+0.269219996 container attach 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:39 compute-0 python3[196774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.553695467 +0000 UTC m=+0.060913583 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.706471851 +0000 UTC m=+0.213689947 container create f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:13:39 compute-0 systemd[1]: Started libpod-conmon-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope.
Dec  5 01:13:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.85203497 +0000 UTC m=+0.359253056 container init f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.868363365 +0000 UTC m=+0.375581451 container start f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:13:39 compute-0 podman[196775]: 2025-12-05 01:13:39.875159448 +0000 UTC m=+0.382377534 container attach f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Dec  5 01:13:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2342597363' entity='client.admin' 
Dec  5 01:13:40 compute-0 systemd[1]: libpod-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope: Deactivated successfully.
Dec  5 01:13:40 compute-0 podman[196775]: 2025-12-05 01:13:40.537401787 +0000 UTC m=+1.044619883 container died f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4cdb305a326752888db785abfb3593a2ef61ee48c7f00fbc5be9b48f5a2834-merged.mount: Deactivated successfully.
Dec  5 01:13:40 compute-0 podman[196775]: 2025-12-05 01:13:40.619615155 +0000 UTC m=+1.126833271 container remove f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b (image=quay.io/ceph/ceph:v18, name=epic_keldysh, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:40 compute-0 systemd[1]: libpod-conmon-f5a39de83cb29dd11760d031ec67add8447837e51d8f0c3362bdaef94cb5656b.scope: Deactivated successfully.
Dec  5 01:13:41 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2342597363' entity='client.admin' 
Dec  5 01:13:41 compute-0 fervent_carson[196744]: [
Dec  5 01:13:41 compute-0 fervent_carson[196744]:    {
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "available": false,
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "ceph_device": false,
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "lsm_data": {},
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "lvs": [],
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "path": "/dev/sr0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "rejected_reasons": [
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "Insufficient space (<5GB)",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "Has a FileSystem"
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        ],
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        "sys_api": {
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "actuators": null,
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "device_nodes": "sr0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "devname": "sr0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "human_readable_size": "482.00 KB",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "id_bus": "ata",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "model": "QEMU DVD-ROM",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "nr_requests": "2",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "parent": "/dev/sr0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "partitions": {},
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "path": "/dev/sr0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "removable": "1",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "rev": "2.5+",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "ro": "0",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "rotational": "1",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "sas_address": "",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "sas_device_handle": "",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "scheduler_mode": "mq-deadline",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "sectors": 0,
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "sectorsize": "2048",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "size": 493568.0,
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "support_discard": "2048",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "type": "disk",
Dec  5 01:13:41 compute-0 fervent_carson[196744]:            "vendor": "QEMU"
Dec  5 01:13:41 compute-0 fervent_carson[196744]:        }
Dec  5 01:13:41 compute-0 fervent_carson[196744]:    }
Dec  5 01:13:41 compute-0 fervent_carson[196744]: ]
Dec  5 01:13:41 compute-0 systemd[1]: libpod-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Deactivated successfully.
Dec  5 01:13:41 compute-0 podman[196728]: 2025-12-05 01:13:41.381802148 +0000 UTC m=+2.736111900 container died 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:13:41 compute-0 systemd[1]: libpod-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Consumed 2.494s CPU time.
Dec  5 01:13:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a19e962232743bf785d3221f9b381e53a177abffa9c9caa359352a4ac8cb829-merged.mount: Deactivated successfully.
Dec  5 01:13:41 compute-0 podman[196728]: 2025-12-05 01:13:41.462297056 +0000 UTC m=+2.816606788 container remove 5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:13:41 compute-0 systemd[1]: libpod-conmon-5e8367d4b8ab7803ae2402e4f6658bf0dc68b213eed34d328b8fa4689ac82c7e.scope: Deactivated successfully.
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:13:41 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  5 01:13:41 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  5 01:13:41 compute-0 ansible-async_wrapper.py[198853]: Invoked with j271356092052 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897221.0511286-37008-201520603484948/AnsiballZ_command.py _
Dec  5 01:13:41 compute-0 ansible-async_wrapper.py[198900]: Starting module and watcher
Dec  5 01:13:41 compute-0 ansible-async_wrapper.py[198900]: Start watching 198902 (30)
Dec  5 01:13:41 compute-0 ansible-async_wrapper.py[198902]: Start module (198902)
Dec  5 01:13:41 compute-0 ansible-async_wrapper.py[198853]: Return async_wrapper task started.
Dec  5 01:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:42 compute-0 python3[198906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.238005614 +0000 UTC m=+0.102665661 container create 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.199497049 +0000 UTC m=+0.064157196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:42 compute-0 systemd[1]: Started libpod-conmon-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope.
Dec  5 01:13:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.353476168 +0000 UTC m=+0.218136245 container init 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.373159397 +0000 UTC m=+0.237819444 container start 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.382335278 +0000 UTC m=+0.246995335 container attach 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:13:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:13:42 compute-0 ceph-mon[192914]: Updating compute-0:/etc/ceph/ceph.conf
Dec  5 01:13:42 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:13:42 compute-0 distracted_rhodes[198982]: 
Dec  5 01:13:42 compute-0 distracted_rhodes[198982]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  5 01:13:42 compute-0 systemd[1]: libpod-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope: Deactivated successfully.
Dec  5 01:13:42 compute-0 podman[198934]: 2025-12-05 01:13:42.980051234 +0000 UTC m=+0.844711281 container died 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-774b649870037a2e89f0d20d890a7669d3ed97f0e789fc29b0a07200e6b96ac6-merged.mount: Deactivated successfully.
Dec  5 01:13:43 compute-0 podman[198934]: 2025-12-05 01:13:43.056765215 +0000 UTC m=+0.921425302 container remove 558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9 (image=quay.io/ceph/ceph:v18, name=distracted_rhodes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:13:43 compute-0 systemd[1]: libpod-conmon-558e35a6692b8f289c23fc5eff2110f1227e4f4a1a9f19f225b01dd4a55e05e9.scope: Deactivated successfully.
Dec  5 01:13:43 compute-0 ansible-async_wrapper.py[198902]: Module complete (198902)
Dec  5 01:13:43 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec  5 01:13:43 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec  5 01:13:43 compute-0 python3[199365]: ansible-ansible.legacy.async_status Invoked with jid=j271356092052.198853 mode=status _async_dir=/root/.ansible_async
Dec  5 01:13:43 compute-0 ceph-mon[192914]: Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.conf
Dec  5 01:13:43 compute-0 auditd[704]: Audit daemon rotating log files
Dec  5 01:13:43 compute-0 python3[199509]: ansible-ansible.legacy.async_status Invoked with jid=j271356092052.198853 mode=cleanup _async_dir=/root/.ansible_async
Dec  5 01:13:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:44 compute-0 python3[199660]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:13:44 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  5 01:13:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  5 01:13:44 compute-0 python3[199835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:44 compute-0 podman[199880]: 2025-12-05 01:13:44.911139871 +0000 UTC m=+0.055693794 container create 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:13:44 compute-0 systemd[1]: Started libpod-conmon-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope.
Dec  5 01:13:44 compute-0 podman[199880]: 2025-12-05 01:13:44.890789883 +0000 UTC m=+0.035343826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.026714428 +0000 UTC m=+0.171268381 container init 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.040203581 +0000 UTC m=+0.184757504 container start 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.044163064 +0000 UTC m=+0.188716987 container attach 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:45 compute-0 ceph-mon[192914]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  5 01:13:45 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:13:45 compute-0 hungry_bell[199926]: 
Dec  5 01:13:45 compute-0 hungry_bell[199926]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  5 01:13:45 compute-0 systemd[1]: libpod-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope: Deactivated successfully.
Dec  5 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.691038177 +0000 UTC m=+0.835592100 container died 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eb181ddf9b5e6a5db4cfd21aa4977f6ff960c081573d2ea370d577e9e159e80-merged.mount: Deactivated successfully.
Dec  5 01:13:45 compute-0 podman[199880]: 2025-12-05 01:13:45.758767893 +0000 UTC m=+0.903321826 container remove 49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e (image=quay.io/ceph/ceph:v18, name=hungry_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:45 compute-0 systemd[1]: libpod-conmon-49dfd9a40cde8f16e7f8c4cb8792d8670104fde9d5987986f787dadee65ee77e.scope: Deactivated successfully.
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:13:46 compute-0 python3[200261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.314074783 +0000 UTC m=+0.054661345 container create d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:46 compute-0 systemd[1]: Started libpod-conmon-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope.
Dec  5 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.29181587 +0000 UTC m=+0.032402442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.424449322 +0000 UTC m=+0.165035954 container init d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.434975581 +0000 UTC m=+0.175562173 container start d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:46 compute-0 podman[200306]: 2025-12-05 01:13:46.440289712 +0000 UTC m=+0.180876264 container attach d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec  5 01:13:46 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec  5 01:13:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Dec  5 01:13:46 compute-0 ansible-async_wrapper.py[198900]: Done in kid B.
Dec  5 01:13:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1903592761' entity='client.admin' 
Dec  5 01:13:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:47 compute-0 systemd[1]: libpod-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope: Deactivated successfully.
Dec  5 01:13:47 compute-0 podman[200306]: 2025-12-05 01:13:47.003448105 +0000 UTC m=+0.744034687 container died d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:13:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-52fbbfdaa6cb772de3e60c28eeea1bd50c98d2c857a66b676a1b822c0853d8a2-merged.mount: Deactivated successfully.
Dec  5 01:13:47 compute-0 podman[200306]: 2025-12-05 01:13:47.071548341 +0000 UTC m=+0.812134903 container remove d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f (image=quay.io/ceph/ceph:v18, name=affectionate_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:13:47 compute-0 systemd[1]: libpod-conmon-d842367e806cd21eeaa73d8976afbf382236c722a7ea14393a1e81a4cd30aa7f.scope: Deactivated successfully.
Dec  5 01:13:47 compute-0 python3[200633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.53471219 +0000 UTC m=+0.057524757 container create fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:47 compute-0 systemd[1]: Started libpod-conmon-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope.
Dec  5 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.516215064 +0000 UTC m=+0.039027661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.635681651 +0000 UTC m=+0.158494278 container init fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.648682391 +0000 UTC m=+0.171494968 container start fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:13:47 compute-0 podman[200667]: 2025-12-05 01:13:47.654781334 +0000 UTC m=+0.177594001 container attach fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:47 compute-0 ceph-mon[192914]: Updating compute-0:/var/lib/ceph/cbd280d3-cbd8-528b-ace6-2b3a887cdcee/config/ceph.client.admin.keyring
Dec  5 01:13:47 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1903592761' entity='client.admin' 
Dec  5 01:13:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4153863721' entity='client.admin' 
Dec  5 01:13:48 compute-0 systemd[1]: libpod-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope: Deactivated successfully.
Dec  5 01:13:48 compute-0 conmon[200719]: conmon fefff7e29b55c97c894a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope/container/memory.events
Dec  5 01:13:48 compute-0 podman[200667]: 2025-12-05 01:13:48.29999045 +0000 UTC m=+0.822803027 container died fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:13:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-da128f5f01d58a3225a6e40dc4f53634aeedad85166e5262e64b02d2bc0a7891-merged.mount: Deactivated successfully.
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:48 compute-0 podman[200667]: 2025-12-05 01:13:48.393366166 +0000 UTC m=+0.916178743 container remove fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb (image=quay.io/ceph/ceph:v18, name=pensive_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:48 compute-0 systemd[1]: libpod-conmon-fefff7e29b55c97c894aed57a0523c18ef2955c00c1b2337b5c32d19f90222fb.scope: Deactivated successfully.
Dec  5 01:13:48 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1))
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  5 01:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:13:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:13:48 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  5 01:13:48 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  5 01:13:48 compute-0 podman[200903]: 2025-12-05 01:13:48.465921119 +0000 UTC m=+0.122392782 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  5 01:13:48 compute-0 python3[201011]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.877677617 +0000 UTC m=+0.066279756 container create 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:48 compute-0 systemd[1]: Started libpod-conmon-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope.
Dec  5 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.851753739 +0000 UTC m=+0.040355918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.973530542 +0000 UTC m=+0.162132731 container init 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:13:48 compute-0 podman[201053]: 2025-12-05 01:13:48.995445765 +0000 UTC m=+0.184047904 container start 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:49 compute-0 podman[201053]: 2025-12-05 01:13:49.001197109 +0000 UTC m=+0.189799268 container attach 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4153863721' entity='client.admin' 
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  5 01:13:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.290037422 +0000 UTC m=+0.064095604 container create 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:13:49 compute-0 systemd[1]: Started libpod-conmon-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope.
Dec  5 01:13:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.262436727 +0000 UTC m=+0.036494969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.387534434 +0000 UTC m=+0.161592616 container init 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.397212889 +0000 UTC m=+0.171271071 container start 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.40180229 +0000 UTC m=+0.175860492 container attach 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  5 01:13:49 compute-0 wonderful_leavitt[201146]: 167 167
Dec  5 01:13:49 compute-0 systemd[1]: libpod-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope: Deactivated successfully.
Dec  5 01:13:49 compute-0 conmon[201146]: conmon 5b445a9ede00f0969f25 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope/container/memory.events
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.404660161 +0000 UTC m=+0.178718353 container died 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:13:49 compute-0 podman[201133]: 2025-12-05 01:13:49.427603543 +0000 UTC m=+0.099795938 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:13:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ebd387b60bc9c1456de72fe0d3580f10fadc4c3316514ec4fa8cb4251f51ee-merged.mount: Deactivated successfully.
Dec  5 01:13:49 compute-0 podman[201118]: 2025-12-05 01:13:49.461003333 +0000 UTC m=+0.235061525 container remove 5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:49 compute-0 systemd[1]: libpod-conmon-5b445a9ede00f0969f256d24c66eef472f754d96a1da560a10a02bd13d985043.scope: Deactivated successfully.
Dec  5 01:13:49 compute-0 podman[201137]: 2025-12-05 01:13:49.479103738 +0000 UTC m=+0.150243483 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 01:13:49 compute-0 systemd[1]: Reloading.
Dec  5 01:13:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Dec  5 01:13:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  5 01:13:49 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:13:49 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:13:49 compute-0 systemd[1]: Reloading.
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:50 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:13:50 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  5 01:13:50 compute-0 ceph-mon[192914]: Deploying daemon crash.compute-0 on compute-0
Dec  5 01:13:50 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  5 01:13:50 compute-0 busy_chebyshev[201075]: set require_min_compat_client to mimic
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  5 01:13:50 compute-0 podman[201053]: 2025-12-05 01:13:50.312226107 +0000 UTC m=+1.500828276 container died 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:50 compute-0 systemd[1]: libpod-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope: Deactivated successfully.
Dec  5 01:13:50 compute-0 systemd[1]: Starting Ceph crash.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:13:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d08eab2ee8f29a2feae9579d3b70ea8d9722d833d31237cd31dc7146fcab4d8b-merged.mount: Deactivated successfully.
Dec  5 01:13:50 compute-0 podman[201053]: 2025-12-05 01:13:50.418159959 +0000 UTC m=+1.606762118 container remove 6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44 (image=quay.io/ceph/ceph:v18, name=busy_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:13:50 compute-0 systemd[1]: libpod-conmon-6db1e7e4219c5c9a0a27c8c069e8d12a2b2a854915d2a6fadfc84e9b6c321f44.scope: Deactivated successfully.
Dec  5 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.666504951 +0000 UTC m=+0.044695552 container create f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea18aa1ee2216af756156558fccf6c05ee38f1e4ed9b8d4f313d099a7f1b819/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.730282714 +0000 UTC m=+0.108473365 container init f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.647093159 +0000 UTC m=+0.025283790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:50 compute-0 podman[201349]: 2025-12-05 01:13:50.742293346 +0000 UTC m=+0.120483967 container start f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:50 compute-0 bash[201349]: f9154648f016c70233aa6bcea5551106336428e4cad77b505cd85ec19f3c3ea8
Dec  5 01:13:50 compute-0 systemd[1]: Started Ceph crash.compute-0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1))
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event baf65416-19b1-4804-9f83-627cc5d30e70 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4a560713-e49c-47b7-a4d8-19e9b8203506 does not exist
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2))
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 01:13:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:13:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec  5 01:13:50 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec  5 01:13:50 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  5 01:13:51 compute-0 python3[201395]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:51 compute-0 podman[201418]: 2025-12-05 01:13:51.085958757 +0000 UTC m=+0.089484786 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.13180133 +0000 UTC m=+0.088790875 container create 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec  5 01:13:51 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 1 completed events
Dec  5 01:13:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:13:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.094351875 +0000 UTC m=+0.051341460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.188+0000 7f4787f56640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.188+0000 7f4787f56640 -1 AuthRegistry(0x7f4780066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  5 01:13:51 compute-0 systemd[1]: Started libpod-conmon-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope.
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.189+0000 7f4787f56640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.189+0000 7f4787f56640 -1 AuthRegistry(0x7f4787f55000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.192+0000 7f4785ccb640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: 2025-12-05T01:13:51.192+0000 7f4787f56640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  5 01:13:51 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-crash-compute-0[201362]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  5 01:13:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.265120111 +0000 UTC m=+0.222109666 container init 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.278116701 +0000 UTC m=+0.235106256 container start 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:13:51 compute-0 podman[201436]: 2025-12-05 01:13:51.285352536 +0000 UTC m=+0.242342121 container attach 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3277226958' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.rknuqb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  5 01:13:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.81897744 +0000 UTC m=+0.083825765 container create ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:51 compute-0 systemd[1]: Started libpod-conmon-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope.
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.785094836 +0000 UTC m=+0.049943251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:51 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14182 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:13:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.935173684 +0000 UTC m=+0.200022029 container init ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.944623172 +0000 UTC m=+0.209471517 container start ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:13:51 compute-0 interesting_lewin[201620]: 167 167
Dec  5 01:13:51 compute-0 systemd[1]: libpod-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope: Deactivated successfully.
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.951274961 +0000 UTC m=+0.216123307 container attach ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:51 compute-0 conmon[201620]: conmon ebdf5729f9eedc37763f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope/container/memory.events
Dec  5 01:13:51 compute-0 podman[201604]: 2025-12-05 01:13:51.952416264 +0000 UTC m=+0.217264589 container died ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d705ad42b8151979ba719e549c3a7b470225b291c81967d2b0852c8ad8efe50a-merged.mount: Deactivated successfully.
Dec  5 01:13:52 compute-0 podman[201604]: 2025-12-05 01:13:52.0169901 +0000 UTC m=+0.281838425 container remove ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lewin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:52 compute-0 systemd[1]: libpod-conmon-ebdf5729f9eedc37763f9550b79b248294bdbfa8ad7a9a79b19d10804088c40e.scope: Deactivated successfully.
Dec  5 01:13:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:52 compute-0 systemd[1]: Reloading.
Dec  5 01:13:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:13:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:13:52 compute-0 ceph-mon[192914]: Deploying daemon mgr.compute-0.rknuqb on compute-0
Dec  5 01:13:52 compute-0 systemd[1]: Reloading.
Dec  5 01:13:52 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:13:52 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:13:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.234232341 +0000 UTC m=+0.060564707 container create a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285/merged/var/lib/ceph/mgr/ceph-compute-0.rknuqb supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.215097998 +0000 UTC m=+0.041430384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.343539234 +0000 UTC m=+0.169871650 container init a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Added host compute-0
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  5 01:13:53 compute-0 podman[201861]: 2025-12-05 01:13:53.367993765 +0000 UTC m=+0.194326141 container start a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:53 compute-0 bash[201861]: a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 systemd[1]: Started Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:13:53 compute-0 busy_mcclintock[201516]: Added host 'compute-0' with addr '192.168.122.100'
Dec  5 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled mon update...
Dec  5 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled mgr update...
Dec  5 01:13:53 compute-0 busy_mcclintock[201516]: Scheduled osd.default_drive_group update...
Dec  5 01:13:53 compute-0 systemd[1]: libpod-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope: Deactivated successfully.
Dec  5 01:13:53 compute-0 podman[201436]: 2025-12-05 01:13:53.435390841 +0000 UTC m=+2.392380386 container died 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: pidfile_write: ignore empty --pid-file
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-febce25a46c6b26c2752d74848e930bcae9d09e0d5042f0355df0f9348946542-merged.mount: Deactivated successfully.
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 podman[201436]: 2025-12-05 01:13:53.498310393 +0000 UTC m=+2.455299938 container remove 70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4 (image=quay.io/ceph/ceph:v18, name=busy_mcclintock, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2))
Dec  5 01:13:53 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7c268afe-0045-4f9a-81e2-b5f2a6f86b3b (Updating mgr deployment (+1 -> 2)) in 3 seconds
Dec  5 01:13:53 compute-0 systemd[1]: libpod-conmon-70171864e5477188a97759292b8b9673361193c8b21e2903911496fadda4e5d4.scope: Deactivated successfully.
Dec  5 01:13:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'alerts'
Dec  5 01:13:53 compute-0 podman[202029]: 2025-12-05 01:13:53.912183765 +0000 UTC m=+0.104973983 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, container_name=kepler)
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:13:53 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'balancer'
Dec  5 01:13:53 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb[201891]: 2025-12-05T01:13:53.924+0000 7ff61db06140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  5 01:13:53 compute-0 python3[202032]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.024139262 +0000 UTC m=+0.058450658 container create 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:13:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:54 compute-0 systemd[1]: Started libpod-conmon-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope.
Dec  5 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:53.999523457 +0000 UTC m=+0.033834873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:13:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.141775517 +0000 UTC m=+0.176086913 container init 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.151078406 +0000 UTC m=+0.185389792 container start 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:54 compute-0 podman[202099]: 2025-12-05 01:13:54.155036027 +0000 UTC m=+0.189347423 container attach 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:13:54 compute-0 ceph-mgr[201895]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:13:54 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'cephadm'
Dec  5 01:13:54 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb[201891]: 2025-12-05T01:13:54.205+0000 7ff61db06140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: Added host compute-0
Dec  5 01:13:54 compute-0 ceph-mon[192914]: Saving service mon spec with placement compute-0
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: Saving service mgr spec with placement compute-0
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  5 01:13:54 compute-0 ceph-mon[192914]: Saving service osd.default_drive_group spec with placement compute-0
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  5 01:13:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2673528238' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  5 01:13:54 compute-0 infallible_cartwright[202140]: 
Dec  5 01:13:54 compute-0 infallible_cartwright[202140]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-05T01:12:22.836369+0000","services":{}},"progress_events":{}}
Dec  5 01:13:54 compute-0 systemd[1]: libpod-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope: Deactivated successfully.
Dec  5 01:13:54 compute-0 conmon[202140]: conmon 29c3eecfb2c5bd098d29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope/container/memory.events
Dec  5 01:13:54 compute-0 podman[202231]: 2025-12-05 01:13:54.876953424 +0000 UTC m=+0.186819782 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:54 compute-0 podman[202253]: 2025-12-05 01:13:54.960231432 +0000 UTC m=+0.075483272 container died 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:13:54 compute-0 podman[202231]: 2025-12-05 01:13:54.982733449 +0000 UTC m=+0.292599827 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:13:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-395d8140490d1fbb3688f60fc67af97e3477916cfa1a285457d5d76463cb50bc-merged.mount: Deactivated successfully.
Dec  5 01:13:55 compute-0 podman[202253]: 2025-12-05 01:13:55.05244608 +0000 UTC m=+0.167697910 container remove 29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8 (image=quay.io/ceph/ceph:v18, name=infallible_cartwright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:13:55 compute-0 systemd[1]: libpod-conmon-29c3eecfb2c5bd098d293b1b6508c01d85bcb40b8d274131eb9d1ad29a5849a8.scope: Deactivated successfully.
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7fd103b1-9299-47e5-aec5-12f7663d9561 does not exist
Dec  5 01:13:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Dec  5 01:13:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1))
Dec  5 01:13:55 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec  5 01:13:55 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:56 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:13:56 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 2 completed events
Dec  5 01:13:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:13:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:56 compute-0 ceph-mgr[201895]: mgr[py] Loading python module 'crash'
Dec  5 01:13:56 compute-0 podman[202504]: 2025-12-05 01:13:56.488579082 +0000 UTC m=+0.113221873 container died a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:13:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-51d9f37dca8c04dbd8de69c338aa22edafb1f21a0a05f1474f40ee315ccf4285-merged.mount: Deactivated successfully.
Dec  5 01:13:56 compute-0 podman[202504]: 2025-12-05 01:13:56.5711117 +0000 UTC m=+0.195754501 container remove a45ef3db10c0d1b234f92ce6ab91a85e98752c9713eff097b262fddba4bc74d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:13:56 compute-0 bash[202504]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-rknuqb
Dec  5 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Main process exited, code=exited, status=143/n/a
Dec  5 01:13:56 compute-0 podman[202531]: 2025-12-05 01:13:56.704617996 +0000 UTC m=+0.122361007 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, vcs-type=git)
Dec  5 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Failed with result 'exit-code'.
Dec  5 01:13:56 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.rknuqb for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:13:56 compute-0 systemd[1]: ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.rknuqb.service: Consumed 4.658s CPU time.
Dec  5 01:13:56 compute-0 systemd[1]: Reloading.
Dec  5 01:13:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:13:57 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:13:57 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:13:57 compute-0 ceph-mon[192914]: Removing daemon mgr.compute-0.rknuqb from compute-0 -- ports [8765]
Dec  5 01:13:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:57 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.rknuqb
Dec  5 01:13:57 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.rknuqb
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"} v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]: dispatch
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]': finished
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:57 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1))
Dec  5 01:13:57 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event c7002aca-ef8f-4ee2-8802-0a0ee52f207e (Updating mgr deployment (-1 -> 1)) in 2 seconds
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b35376fe-f03e-4798-9012-965b03cf11ce does not exist
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:13:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:13:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:13:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]: dispatch
Dec  5 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.rknuqb"}]': finished
Dec  5 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:13:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.227799561 +0000 UTC m=+0.054420526 container create 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:13:58 compute-0 systemd[1]: Started libpod-conmon-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope.
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.212135975 +0000 UTC m=+0.038756960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.337451284 +0000 UTC m=+0.164072259 container init 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.357200684 +0000 UTC m=+0.183821679 container start 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.364294581 +0000 UTC m=+0.190915666 container attach 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:58 compute-0 great_hypatia[202770]: 167 167
Dec  5 01:13:58 compute-0 systemd[1]: libpod-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope: Deactivated successfully.
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.36999053 +0000 UTC m=+0.196611555 container died 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-86698401ed7035b99d4fa762f739eee4ef6d662191873857b89b09ffce57097a-merged.mount: Deactivated successfully.
Dec  5 01:13:58 compute-0 podman[202756]: 2025-12-05 01:13:58.443511097 +0000 UTC m=+0.270132072 container remove 38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hypatia, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:58 compute-0 systemd[1]: libpod-conmon-38ef62b7c02dd126fa8cc4a1c8d7f0eb9fb16a526d65bed70471d1cb14d5cac5.scope: Deactivated successfully.
Dec  5 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.664263142 +0000 UTC m=+0.057941204 container create ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:13:58 compute-0 systemd[1]: Started libpod-conmon-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope.
Dec  5 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.642359833 +0000 UTC m=+0.036037905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:13:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.781960659 +0000 UTC m=+0.175638751 container init ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.799471157 +0000 UTC m=+0.193149219 container start ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:13:58 compute-0 podman[202793]: 2025-12-05 01:13:58.805154995 +0000 UTC m=+0.198833077 container attach ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:13:59 compute-0 ceph-mon[192914]: Removing key for mgr.compute-0.rknuqb
Dec  5 01:13:59 compute-0 podman[158197]: time="2025-12-05T01:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 25441 "" "Go-http-client/1.1"
Dec  5 01:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4827 "" "Go-http-client/1.1"
Dec  5 01:13:59 compute-0 gracious_swirles[202809]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:13:59 compute-0 gracious_swirles[202809]: --> relative data size: 1.0
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8c4de221-4fda-4bb1-b794-fc4329742186
Dec  5 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"} v 0) v1
Dec  5 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]: dispatch
Dec  5 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  5 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]': finished
Dec  5 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  5 01:14:00 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  5 01:14:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:00 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  5 01:14:00 compute-0 lvm[202873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 01:14:00 compute-0 lvm[202873]: VG ceph_vg0 finished
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:00 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec  5 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  5 01:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129212990' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  5 01:14:01 compute-0 gracious_swirles[202809]: stderr: got monmap epoch 1
Dec  5 01:14:01 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.0
Dec  5 01:14:01 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 3 completed events
Dec  5 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec  5 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec  5 01:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]: dispatch
Dec  5 01:14:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/692644570' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8c4de221-4fda-4bb1-b794-fc4329742186"}]': finished
Dec  5 01:14:01 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:01 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 8c4de221-4fda-4bb1-b794-fc4329742186 --setuser ceph --setgroup ceph
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: ERROR   01:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:14:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:14:01 compute-0 podman[202926]: 2025-12-05 01:14:01.436377709 +0000 UTC m=+0.094944284 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  5 01:14:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  5 01:14:02 compute-0 ceph-mon[192914]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  5 01:14:02 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:01.286+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:01.287+0000 7f5ce6256740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  5 01:14:03 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec  5 01:14:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 0
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 944e6457-e96a-45b2-ba7f-23ecd70be9f8
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"} v 0) v1
Dec  5 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]: dispatch
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]': finished
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  5 01:14:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:04 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:04 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:04 compute-0 lvm[203853]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 01:14:04 compute-0 lvm[203853]: VG ceph_vg1 finished
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:04 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  5 01:14:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  5 01:14:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3148035459' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  5 01:14:05 compute-0 gracious_swirles[202809]: stderr: got monmap epoch 1
Dec  5 01:14:05 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.1
Dec  5 01:14:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]: dispatch
Dec  5 01:14:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/658691782' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8"}]': finished
Dec  5 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  5 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  5 01:14:05 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 944e6457-e96a-45b2-ba7f-23ecd70be9f8 --setuser ceph --setgroup ceph
Dec  5 01:14:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:05.524+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:05.524+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:05.525+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:05.525+0000 7f0afce68740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:08 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new adfceb0a-e5d7-48a8-b6ba-0c42f745777c
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"} v 0) v1
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]: dispatch
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]': finished
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:08 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]: dispatch
Dec  5 01:14:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1716259959' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c"}]': finished
Dec  5 01:14:09 compute-0 lvm[204811]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 01:14:09 compute-0 lvm[204811]: VG ceph_vg2 finished
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec  5 01:14:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Dec  5 01:14:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2591695986' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: stderr: got monmap epoch 1
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: --> Creating keyring file for osd.2
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec  5 01:14:09 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid adfceb0a-e5d7-48a8-b6ba-0c42f745777c --setuser ceph --setgroup ceph
Dec  5 01:14:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:09.890+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:09.890+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:09.891+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: stderr: 2025-12-05T01:14:09.891+0000 7f09b44cd740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm activate successful for osd ID: 2
Dec  5 01:14:12 compute-0 gracious_swirles[202809]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec  5 01:14:12 compute-0 systemd[1]: libpod-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Deactivated successfully.
Dec  5 01:14:12 compute-0 systemd[1]: libpod-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Consumed 8.345s CPU time.
Dec  5 01:14:12 compute-0 podman[202793]: 2025-12-05 01:14:12.613722734 +0000 UTC m=+14.007400826 container died ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-67417fa98245a11fb0f0d1e2344f29841590f8818262fc5073705e0f7090e666-merged.mount: Deactivated successfully.
Dec  5 01:14:12 compute-0 podman[202793]: 2025-12-05 01:14:12.718668356 +0000 UTC m=+14.112346428 container remove ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:14:12 compute-0 systemd[1]: libpod-conmon-ca47c3e846d4670fc66759301264fbee3626e8f324c6b934a19c0980b41a0e90.scope: Deactivated successfully.
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.618613281 +0000 UTC m=+0.078378163 container create 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.58158338 +0000 UTC m=+0.041348322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:13 compute-0 systemd[1]: Started libpod-conmon-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope.
Dec  5 01:14:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.74250589 +0000 UTC m=+0.202270772 container init 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.757127007 +0000 UTC m=+0.216891859 container start 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.762667601 +0000 UTC m=+0.222432573 container attach 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:14:13 compute-0 naughty_pike[205910]: 167 167
Dec  5 01:14:13 compute-0 systemd[1]: libpod-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope: Deactivated successfully.
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.76586178 +0000 UTC m=+0.225626632 container died 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:14:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b68dd936ed350d030fa6eef477b214908c3243b23257b3f796d6014e90297da-merged.mount: Deactivated successfully.
Dec  5 01:14:13 compute-0 podman[205893]: 2025-12-05 01:14:13.829653726 +0000 UTC m=+0.289418578 container remove 88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_pike, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:13 compute-0 systemd[1]: libpod-conmon-88143ba220bd09bdfb6198d4afe732308a0c123370edacc9fb1ba7782d69e868.scope: Deactivated successfully.
Dec  5 01:14:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.08912132 +0000 UTC m=+0.091451167 container create 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.062583791 +0000 UTC m=+0.064913668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:14 compute-0 systemd[1]: Started libpod-conmon-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope.
Dec  5 01:14:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.320716597 +0000 UTC m=+0.323046494 container init 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.331341603 +0000 UTC m=+0.333671440 container start 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:14 compute-0 podman[205933]: 2025-12-05 01:14:14.336561969 +0000 UTC m=+0.338891846 container attach 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:14:15 compute-0 priceless_yalow[205950]: {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    "0": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "devices": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "/dev/loop3"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            ],
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_name": "ceph_lv0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_size": "21470642176",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "name": "ceph_lv0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "tags": {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.crush_device_class": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.encrypted": "0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_id": "0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.vdo": "0"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            },
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "vg_name": "ceph_vg0"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        }
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    ],
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    "1": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "devices": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "/dev/loop4"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            ],
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_name": "ceph_lv1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_size": "21470642176",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "name": "ceph_lv1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "tags": {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.crush_device_class": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.encrypted": "0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_id": "1",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.vdo": "0"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            },
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "vg_name": "ceph_vg1"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        }
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    ],
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    "2": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "devices": [
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "/dev/loop5"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            ],
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_name": "ceph_lv2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_size": "21470642176",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "name": "ceph_lv2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "tags": {
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.crush_device_class": "",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.encrypted": "0",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osd_id": "2",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:                "ceph.vdo": "0"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            },
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "type": "block",
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:            "vg_name": "ceph_vg2"
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:        }
Dec  5 01:14:15 compute-0 priceless_yalow[205950]:    ]
Dec  5 01:14:15 compute-0 priceless_yalow[205950]: }
Dec  5 01:14:15 compute-0 systemd[1]: libpod-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope: Deactivated successfully.
Dec  5 01:14:15 compute-0 podman[205959]: 2025-12-05 01:14:15.252810367 +0000 UTC m=+0.046250909 container died 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-718cefdd81a968875a8b7231cbeb6c013f09146db708fed53e9b8662846cba31-merged.mount: Deactivated successfully.
Dec  5 01:14:15 compute-0 podman[205959]: 2025-12-05 01:14:15.348646535 +0000 UTC m=+0.142086997 container remove 9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:15 compute-0 systemd[1]: libpod-conmon-9c30da8737c2c956f19e3396a2d669e7cb051bcab60216a84144e147234f920f.scope: Deactivated successfully.
Dec  5 01:14:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Dec  5 01:14:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  5 01:14:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:15 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec  5 01:14:15 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:14:16
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] No pools available
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:14:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.356202165 +0000 UTC m=+0.067834030 container create 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:14:16 compute-0 systemd[1]: Started libpod-conmon-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope.
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.321530979 +0000 UTC m=+0.033162864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.472354428 +0000 UTC m=+0.183986353 container init 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.482950303 +0000 UTC m=+0.194582158 container start 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.489810444 +0000 UTC m=+0.201442369 container attach 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:16 compute-0 cool_allen[206126]: 167 167
Dec  5 01:14:16 compute-0 systemd[1]: libpod-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope: Deactivated successfully.
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.494067633 +0000 UTC m=+0.205699458 container died 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-46ae0d819869b882ca625f6881492e517bcb81fd0c5e3f9da4391ca048923607-merged.mount: Deactivated successfully.
Dec  5 01:14:16 compute-0 podman[206110]: 2025-12-05 01:14:16.550084723 +0000 UTC m=+0.261716568 container remove 9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_allen, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:14:16 compute-0 systemd[1]: libpod-conmon-9ca565f826cac6f4df95a9c18e15201bb29251993452eb0c4ed80ac88404b08e.scope: Deactivated successfully.
Dec  5 01:14:16 compute-0 podman[206157]: 2025-12-05 01:14:16.895410676 +0000 UTC m=+0.063274442 container create 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:16 compute-0 podman[206157]: 2025-12-05 01:14:16.871292675 +0000 UTC m=+0.039156451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:16 compute-0 systemd[1]: Started libpod-conmon-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope.
Dec  5 01:14:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.062289152 +0000 UTC m=+0.230152958 container init 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.081690342 +0000 UTC m=+0.249554108 container start 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.086176077 +0000 UTC m=+0.254039893 container attach 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:17 compute-0 ceph-mon[192914]: Deploying daemon osd.0 on compute-0
Dec  5 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  5 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]:                            [--no-systemd] [--no-tmpfs]
Dec  5 01:14:17 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test[206174]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  5 01:14:17 compute-0 systemd[1]: libpod-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope: Deactivated successfully.
Dec  5 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.736508142 +0000 UTC m=+0.904371918 container died 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:14:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-df08a6a7b020ce08338203d78321eff49b7b746244b8089c5a7555db38b7e220-merged.mount: Deactivated successfully.
Dec  5 01:14:17 compute-0 podman[206157]: 2025-12-05 01:14:17.822477675 +0000 UTC m=+0.990341451 container remove 38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate-test, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:14:17 compute-0 systemd[1]: libpod-conmon-38fed600e05bce803648ef13b17d45b05b6a656e21d173e5aa06c2d7669471df.scope: Deactivated successfully.
Dec  5 01:14:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:18 compute-0 systemd[1]: Reloading.
Dec  5 01:14:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:18 compute-0 systemd[1]: Reloading.
Dec  5 01:14:18 compute-0 podman[206246]: 2025-12-05 01:14:18.763843814 +0000 UTC m=+0.118471890 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:14:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:19 compute-0 systemd[1]: Starting Ceph osd.0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.472818741 +0000 UTC m=+0.102274728 container create 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.40814587 +0000 UTC m=+0.037601867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.605783552 +0000 UTC m=+0.235239559 container init 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.621044817 +0000 UTC m=+0.250500794 container start 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:19 compute-0 podman[206350]: 2025-12-05 01:14:19.627342582 +0000 UTC m=+0.256798559 container attach 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:14:19 compute-0 podman[206365]: 2025-12-05 01:14:19.675164624 +0000 UTC m=+0.133613061 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:14:19 compute-0 podman[206368]: 2025-12-05 01:14:19.728383045 +0000 UTC m=+0.177929334 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:14:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:20 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:20 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:20 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  5 01:14:20 compute-0 bash[206350]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec  5 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  5 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec  5 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  5 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  5 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:21 compute-0 bash[206350]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec  5 01:14:21 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate[206369]: --> ceph-volume raw activate successful for osd ID: 0
Dec  5 01:14:21 compute-0 bash[206350]: --> ceph-volume raw activate successful for osd ID: 0
Dec  5 01:14:21 compute-0 systemd[1]: libpod-155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1.scope: Deactivated successfully.
Dec  5 01:14:21 compute-0 systemd[1]: libpod-155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1.scope: Consumed 1.482s CPU time.
Dec  5 01:14:21 compute-0 podman[206350]: 2025-12-05 01:14:21.077720841 +0000 UTC m=+1.707176848 container died 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7bbd02eaa778214f8bf6a4ce6059cf7a39c914c116ef656c96d23312a6bd5d3-merged.mount: Deactivated successfully.
Dec  5 01:14:21 compute-0 podman[206350]: 2025-12-05 01:14:21.211383212 +0000 UTC m=+1.840839219 container remove 155781d3de9325927b31a382b8fbb3d626fb8d299d729f8976e09e8d9578e0f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 01:14:21 compute-0 podman[206565]: 2025-12-05 01:14:21.268152792 +0000 UTC m=+0.108918623 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  5 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.534038534 +0000 UTC m=+0.078796904 container create a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.500707616 +0000 UTC m=+0.045466036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f931fbb0f233f99bf5e84c8a1025767c73da2af8e68d8b31b2133c48a93dca/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.680494742 +0000 UTC m=+0.225253162 container init a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:21 compute-0 podman[206628]: 2025-12-05 01:14:21.697189136 +0000 UTC m=+0.241947496 container start a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:21 compute-0 bash[206628]: a1423cde747e417c6d5c4992cf49da7bd11f7624f7f045ac8f6cd3cd6dd674e9
Dec  5 01:14:21 compute-0 systemd[1]: Started Ceph osd.0 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:14:21 compute-0 ceph-osd[206647]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:14:21 compute-0 ceph-osd[206647]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  5 01:14:21 compute-0 ceph-osd[206647]: pidfile_write: ignore empty --pid-file
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e5a55800 /var/lib/ceph/osd/ceph-0/block) close
Dec  5 01:14:21 compute-0 ceph-osd[206647]: bdev(0x5630e4c1d800 /var/lib/ceph/osd/ceph-0/block) close
Dec  5 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Dec  5 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  5 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:21 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  5 01:14:21 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  5 01:14:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:22 compute-0 ceph-osd[206647]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec  5 01:14:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:22 compute-0 ceph-osd[206647]: load: jerasure load: lrc 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) close
Dec  5 01:14:22 compute-0 ceph-osd[206647]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8cc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs mount
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs mount shared_bdev_used = 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Git sha 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB SUMMARY
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB Session ID:  GYHZQKVIA575O32EF2LZ
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                     Options.env: 0x5630e5aa7e30
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                Options.info_log: 0x5630e4ca8aa0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.write_buffer_manager: 0x5630e5bae460
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compression algorithms supported:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZSTD supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kXpressCompression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kBZip2Compression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kLZ4Compression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZlibCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kSnappyCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5630e4c90dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9140)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9100)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.647188285 +0000 UTC m=+0.064080795 container create b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a58bf9c-cd82-4306-99b8-5561449df99e
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262667754, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262668199, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: freelist init
Dec  5 01:14:22 compute-0 ceph-osd[206647]: freelist _read_cfg
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs umount
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) close
Dec  5 01:14:22 compute-0 systemd[1]: Started libpod-conmon-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope.
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.627560708 +0000 UTC m=+0.044453248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.757740842 +0000 UTC m=+0.174633382 container init b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.768525163 +0000 UTC m=+0.185417683 container start b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.773048709 +0000 UTC m=+0.189941259 container attach b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:22 compute-0 jolly_albattani[207018]: 167 167
Dec  5 01:14:22 compute-0 systemd[1]: libpod-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope: Deactivated successfully.
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.777672267 +0000 UTC m=+0.194564807 container died b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  5 01:14:22 compute-0 ceph-mon[192914]: Deploying daemon osd.1 on compute-0
Dec  5 01:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d6bade19b92aa99aa7adb5c26c74b7db2848ded4da31b67cf8097e7a12d5ff5-merged.mount: Deactivated successfully.
Dec  5 01:14:22 compute-0 podman[206809]: 2025-12-05 01:14:22.828511643 +0000 UTC m=+0.245404163 container remove b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_albattani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:14:22 compute-0 systemd[1]: libpod-conmon-b80d6e5e2a08fe2e7a53305306f2a36627417fc691ab1c89404b06595ac40d0e.scope: Deactivated successfully.
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bdev(0x5630e4c8d400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs mount
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluefs mount shared_bdev_used = 4718592
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Git sha 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB SUMMARY
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB Session ID:  GYHZQKVIA575O32EF2LY
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                     Options.env: 0x5630e5c44230
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                Options.info_log: 0x5630e4ca8840
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.write_buffer_manager: 0x5630e5bae460
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Compression algorithms supported:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZSTD supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kXpressCompression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kBZip2Compression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kLZ4Compression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kZlibCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: 	kSnappyCompression supported: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5630e4c90dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5630e4c90dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x5630e4c90dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca8c20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5630e4ca9220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5630e4c90430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3a58bf9c-cd82-4306-99b8-5561449df99e
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262923082, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262930358, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262935715, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262940663, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897262, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3a58bf9c-cd82-4306-99b8-5561449df99e", "db_session_id": "GYHZQKVIA575O32EF2LY", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897262943577, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5630e5c98000
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: DB pointer 0x5630e4ccba00
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec  5 01:14:22 compute-0 ceph-osd[206647]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  5 01:14:22 compute-0 ceph-osd[206647]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  5 01:14:22 compute-0 ceph-osd[206647]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  5 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load lua
Dec  5 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load sdk
Dec  5 01:14:22 compute-0 ceph-osd[206647]: _get_class not permitted to load test_remote_reads
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 load_pgs
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 load_pgs opened 0 pgs
Dec  5 01:14:22 compute-0 ceph-osd[206647]: osd.0 0 log_to_monitors true
Dec  5 01:14:22 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:22.990+0000 7f938f71a740 -1 osd.0 0 log_to_monitors true
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.108701112 +0000 UTC m=+0.058068937 container create 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:14:23 compute-0 systemd[1]: Started libpod-conmon-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope.
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.085511337 +0000 UTC m=+0.034879162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.281178844 +0000 UTC m=+0.230546679 container init 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.305611414 +0000 UTC m=+0.254979209 container start 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.310102659 +0000 UTC m=+0.259470494 container attach 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:23 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:23 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  5 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]:                            [--no-systemd] [--no-tmpfs]
Dec  5 01:14:23 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test[207279]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  5 01:14:23 compute-0 systemd[1]: libpod-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope: Deactivated successfully.
Dec  5 01:14:23 compute-0 podman[207263]: 2025-12-05 01:14:23.962183233 +0000 UTC m=+0.911551058 container died 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ac275532091ee7cd958b370752433cba0bd556588939d787c4633a23be4ab44-merged.mount: Deactivated successfully.
Dec  5 01:14:24 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  5 01:14:24 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  5 01:14:24 compute-0 podman[207263]: 2025-12-05 01:14:24.043917449 +0000 UTC m=+0.993285254 container remove 4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:24 compute-0 systemd[1]: libpod-conmon-4e24f0920f1f8821451a7cb12d50d0de8b95d32a128d7054e956600d8be53379.scope: Deactivated successfully.
Dec  5 01:14:24 compute-0 podman[207285]: 2025-12-05 01:14:24.138135242 +0000 UTC m=+0.142355144 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=)
Dec  5 01:14:24 compute-0 systemd[1]: Reloading.
Dec  5 01:14:24 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:24 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:24 compute-0 systemd[1]: Reloading.
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 done with init, starting boot process
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 start_boot
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  5 01:14:24 compute-0 ceph-osd[206647]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec  5 01:14:24 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  5 01:14:24 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:24 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:25 compute-0 systemd[1]: Starting Ceph osd.1 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:14:25 compute-0 python3[207443]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.547230041 +0000 UTC m=+0.073698072 container create ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.592465811 +0000 UTC m=+0.067451999 container create 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.515129038 +0000 UTC m=+0.041597089 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:14:25 compute-0 systemd[1]: Started libpod-conmon-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope.
Dec  5 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.560567583 +0000 UTC m=+0.035553781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.707300088 +0000 UTC m=+0.233768139 container init ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.722868191 +0000 UTC m=+0.249336222 container start ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.733136747 +0000 UTC m=+0.208122955 container init 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:25 compute-0 podman[207474]: 2025-12-05 01:14:25.741165391 +0000 UTC m=+0.267633442 container attach ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.742820257 +0000 UTC m=+0.217806445 container start 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 01:14:25 compute-0 podman[207495]: 2025-12-05 01:14:25.763610716 +0000 UTC m=+0.238596914 container attach 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:14:25 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec  5 01:14:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:25 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:25 compute-0 ceph-mon[192914]: from='osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  5 01:14:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1252479157' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  5 01:14:26 compute-0 infallible_mahavira[207513]: 
Dec  5 01:14:26 compute-0 infallible_mahavira[207513]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":119,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":3,"num_up_osds":0,"osd_up_since":0,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T01:14:18.046599+0000","services":{}},"progress_events":{}}
Dec  5 01:14:26 compute-0 systemd[1]: libpod-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope: Deactivated successfully.
Dec  5 01:14:26 compute-0 conmon[207513]: conmon ff03e9b5fa3932394169 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope/container/memory.events
Dec  5 01:14:26 compute-0 podman[207550]: 2025-12-05 01:14:26.456454794 +0000 UTC m=+0.027874657 container died ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4ad17509fe53fb2424eddcdfaaccd7701b9d09fe571fbf45c128fa30f44aa03-merged.mount: Deactivated successfully.
Dec  5 01:14:26 compute-0 podman[207550]: 2025-12-05 01:14:26.546291636 +0000 UTC m=+0.117711469 container remove ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043 (image=quay.io/ceph/ceph:v18, name=infallible_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:26 compute-0 systemd[1]: libpod-conmon-ff03e9b5fa393239416979e62cd283849c0e3b5600d199ecc4e2b73327949043.scope: Deactivated successfully.
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:26 compute-0 bash[207495]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  5 01:14:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate[207517]: --> ceph-volume raw activate successful for osd ID: 1
Dec  5 01:14:26 compute-0 bash[207495]: --> ceph-volume raw activate successful for osd ID: 1
Dec  5 01:14:26 compute-0 systemd[1]: libpod-3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c.scope: Deactivated successfully.
Dec  5 01:14:26 compute-0 systemd[1]: libpod-3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c.scope: Consumed 1.124s CPU time.
Dec  5 01:14:26 compute-0 podman[207495]: 2025-12-05 01:14:26.872316981 +0000 UTC m=+1.347303169 container died 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:26 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec  5 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:26 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2c2b7f7ba518dde22bbf15aa14ee918fdf4ec7845d7c85b01fd85a3630d8ca5-merged.mount: Deactivated successfully.
Dec  5 01:14:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:27 compute-0 podman[207702]: 2025-12-05 01:14:27.00049143 +0000 UTC m=+0.091097878 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal)
Dec  5 01:14:27 compute-0 podman[207495]: 2025-12-05 01:14:27.030534036 +0000 UTC m=+1.505520214 container remove 3e37c8594622052a1ba2bf5f0d3df7622e1108c1f9ce1966160a7e564396249c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1-activate, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.30619776 +0000 UTC m=+0.071946814 container create 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.273992234 +0000 UTC m=+0.039741288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3d8edcf8b60eace6ecbb88dbbb627660008ae3e828280673345062c5448a09a/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.419191996 +0000 UTC m=+0.184941060 container init 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:14:27 compute-0 podman[207777]: 2025-12-05 01:14:27.432098136 +0000 UTC m=+0.197847180 container start 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:27 compute-0 bash[207777]: 4bb9d15168558ce4ae587a8509818133e2ad26af978ea8ff9feb20c8ba2b4839
Dec  5 01:14:27 compute-0 systemd[1]: Started Ceph osd.1 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:14:27 compute-0 ceph-osd[207795]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:14:27 compute-0 ceph-osd[207795]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  5 01:14:27 compute-0 ceph-osd[207795]: pidfile_write: ignore empty --pid-file
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x5648474d9800 /var/lib/ceph/osd/ceph-1/block) close
Dec  5 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Dec  5 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  5 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:27 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec  5 01:14:27 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec  5 01:14:27 compute-0 ceph-osd[207795]: bdev(0x564846697800 /var/lib/ceph/osd/ceph-1/block) close
Dec  5 01:14:27 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec  5 01:14:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:27 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  5 01:14:28 compute-0 ceph-osd[207795]: load: jerasure load: lrc 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.461503995 +0000 UTC m=+0.064665822 container create b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:14:28 compute-0 systemd[1]: Started libpod-conmon-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope.
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.431660324 +0000 UTC m=+0.034822221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:28 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  5 01:14:28 compute-0 ceph-mon[192914]: Deploying daemon osd.2 on compute-0
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.577133434 +0000 UTC m=+0.180295311 container init b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  5 01:14:28 compute-0 ceph-osd[207795]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846860c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs mount
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.58884155 +0000 UTC m=+0.192003377 container start b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs mount shared_bdev_used = 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Git sha 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB SUMMARY
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB Session ID:  JNG2OMSWA32VFJZW4PQ8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                     Options.env: 0x56484752be30
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                Options.info_log: 0x564846722720
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.594105916 +0000 UTC m=+0.197267783 container attach b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.write_buffer_manager: 0x564847638460
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compression algorithms supported:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kZSTD supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kXpressCompression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kBZip2Compression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kLZ4Compression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kZlibCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: 	kSnappyCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56484670add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56484670add0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 affectionate_khorana[207971]: 167 167
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 systemd[1]: libpod-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope: Deactivated successfully.
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.599187958 +0000 UTC m=+0.202349775 container died b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722d60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d11a9ff0d1692adadae838336c8084c6f31dc9782fdfd8a1c9184d2592fea31e-merged.mount: Deactivated successfully.
Dec  5 01:14:28 compute-0 podman[207958]: 2025-12-05 01:14:28.661165203 +0000 UTC m=+0.264327030 container remove b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_khorana, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bcc1f64-0499-4951-869f-3891619a47de
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268665156, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268666023, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: freelist init
Dec  5 01:14:28 compute-0 ceph-osd[207795]: freelist _read_cfg
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs umount
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) close
Dec  5 01:14:28 compute-0 systemd[1]: libpod-conmon-b5ff8d47eb855d34de167f88e779500de92cc5f884ab3251da684e94bb1129ba.scope: Deactivated successfully.
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.620 iops: 4766.618 elapsed_sec: 0.629
Dec  5 01:14:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [WRN] : OSD bench result of 4766.617645 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
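The warning above suggests measuring the device's real IOPS capacity with an external tool and then overriding `osd_mclock_max_capacity_iops_[hdd|ssd]`. A minimal sketch of that workflow, assuming a rotational backing device for osd.0 and a hypothetical measured value of 4500 IOPS (the fio job parameters and the device path are illustrative, not taken from this log):

```shell
# Benchmark the raw device with fio (4 KiB random writes, as the OSD bench does).
# /dev/sdX is a placeholder for the OSD's backing device.
fio --name=osd-bench --filename=/dev/sdX --rw=randwrite --bs=4k \
    --iodepth=16 --direct=1 --runtime=60 --time_based --group_reporting

# Override the mClock IOPS capacity for this OSD with the measured value.
# Use ..._iops_ssd instead for a non-rotational device.
ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 4500

# Confirm the override took effect.
ceph config get osd.0 osd_mclock_max_capacity_iops_hdd
```

Without an override, the OSD keeps the default capacity noted in the log (315 IOPS here) whenever its self-bench result falls outside the sanity range of 50–500 IOPS.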
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 0 waiting for initial osdmap
Dec  5 01:14:28 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:28.687+0000 7f938b69a640 -1 osd.0 0 waiting for initial osdmap
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:28 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-0[206643]: 2025-12-05T01:14:28.715+0000 7f9386cc2640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 set_numa_affinity not setting numa affinity
Dec  5 01:14:28 compute-0 ceph-osd[206647]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bdev(0x564846861400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs mount
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluefs mount shared_bdev_used = 4718592
Dec  5 01:14:28 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Git sha 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB SUMMARY
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DB Session ID:  JNG2OMSWA32VFJZW4PQ9
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                     Options.env: 0x5648476ec460
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                Options.info_log: 0x564846722de0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.write_buffer_manager: 0x564847638460
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Compression algorithms supported:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kZSTD supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kXpressCompression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kBZip2Compression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kLZ4Compression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kZlibCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: #011kSnappyCompression supported: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4014556596; not ready for session (expect reconnect)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5648467228a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670add0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564846722e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x56484670a430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1bcc1f64-0499-4951-869f-3891619a47de
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268946491, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268959667, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268964226, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268974451, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897268, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1bcc1f64-0499-4951-869f-3891619a47de", "db_session_id": "JNG2OMSWA32VFJZW4PQ9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897268978726, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:28 compute-0 ceph-osd[207795]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.028754827 +0000 UTC m=+0.087927589 container create 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5648476f8000
Dec  5 01:14:29 compute-0 ceph-osd[207795]: rocksdb: DB pointer 0x564846749a00
Dec  5 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  5 01:14:29 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  5 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.2 total, 0.2 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.2 total, 0.2 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  5 01:14:29 compute-0 ceph-osd[207795]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  5 01:14:29 compute-0 ceph-osd[207795]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  5 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load lua
Dec  5 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load sdk
Dec  5 01:14:29 compute-0 ceph-osd[207795]: _get_class not permitted to load test_remote_reads
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 load_pgs
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 load_pgs opened 0 pgs
Dec  5 01:14:29 compute-0 ceph-osd[207795]: osd.1 0 log_to_monitors true
Dec  5 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:29.050+0000 7f1d272b4740 -1 osd.1 0 log_to_monitors true
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:28.994549635 +0000 UTC m=+0.053722467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:29 compute-0 systemd[1]: Started libpod-conmon-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope.
Dec  5 01:14:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.176690546 +0000 UTC m=+0.235863328 container init 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.199294965 +0000 UTC m=+0.258467747 container start 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.20522437 +0000 UTC m=+0.264397132 container attach 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596] boot
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:29 compute-0 ceph-mon[192914]: OSD bench result of 4766.617645 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  5 01:14:29 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  5 01:14:29 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:29 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:29 compute-0 ceph-osd[206647]: osd.0 9 state: booting -> active
Dec  5 01:14:29 compute-0 podman[158197]: time="2025-12-05T01:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29000 "" "Go-http-client/1.1"
Dec  5 01:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5784 "" "Go-http-client/1.1"
Dec  5 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec  5 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]:                            [--no-systemd] [--no-tmpfs]
Dec  5 01:14:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test[208429]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  5 01:14:29 compute-0 systemd[1]: libpod-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope: Deactivated successfully.
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.923431005 +0000 UTC m=+0.982603747 container died 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f2e58e1a7acb5d5ee4e1c4e17144605e6f96d5774031cad44251431a726c145-merged.mount: Deactivated successfully.
Dec  5 01:14:29 compute-0 podman[208335]: 2025-12-05 01:14:29.984275159 +0000 UTC m=+1.043447901 container remove 0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:14:29 compute-0 systemd[1]: libpod-conmon-0a69aca1d39a2dab4b7bb93ddc351052e2759e4b481e78a82388054a6180808d.scope: Deactivated successfully.
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  5 01:14:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  5 01:14:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: [devicehealth INFO root] creating mgr pool
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  5 01:14:30 compute-0 systemd[1]: Reloading.
Dec  5 01:14:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 done with init, starting boot process
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 start_boot
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  5 01:14:30 compute-0 ceph-osd[207795]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  5 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  5 01:14:30 compute-0 ceph-osd[206647]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:30 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  5 01:14:30 compute-0 ceph-mon[192914]: osd.0 [v2:192.168.122.100:6802/4014556596,v1:192.168.122.100:6803/4014556596] boot
Dec  5 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  5 01:14:30 compute-0 ceph-mon[192914]: from='osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  5 01:14:30 compute-0 systemd[1]: Reloading.
Dec  5 01:14:31 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:14:31 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:14:31 compute-0 systemd[1]: Starting Ceph osd.2 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:14:31 compute-0 openstack_network_exporter[160350]: ERROR   01:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  5 01:14:31 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec  5 01:14:31 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec  5 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:31 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  5 01:14:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  5 01:14:31 compute-0 podman[208574]: 2025-12-05 01:14:31.68081295 +0000 UTC m=+0.091023795 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.714405595 +0000 UTC m=+0.078640980 container create cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.686723614 +0000 UTC m=+0.050959009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.900017883 +0000 UTC m=+0.264253258 container init cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.908242411 +0000 UTC m=+0.272477796 container start cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:31 compute-0 podman[208594]: 2025-12-05 01:14:31.924467653 +0000 UTC m=+0.288703078 container attach cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:14:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  5 01:14:32 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:32 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:33 compute-0 bash[208594]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec  5 01:14:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate[208622]: --> ceph-volume raw activate successful for osd ID: 2
Dec  5 01:14:33 compute-0 bash[208594]: --> ceph-volume raw activate successful for osd ID: 2
Dec  5 01:14:33 compute-0 systemd[1]: libpod-cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be.scope: Deactivated successfully.
Dec  5 01:14:33 compute-0 systemd[1]: libpod-cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be.scope: Consumed 1.373s CPU time.
Dec  5 01:14:33 compute-0 podman[208594]: 2025-12-05 01:14:33.26845451 +0000 UTC m=+1.632689955 container died cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-de31eebe7e9042a9854aa7871de0be3864e9495eaf2d11dfb906e4bd740d673b-merged.mount: Deactivated successfully.
Dec  5 01:14:33 compute-0 podman[208594]: 2025-12-05 01:14:33.424780342 +0000 UTC m=+1.789015757 container remove cb116a37651a01f6de19824ee721a826e522a1eb68d3df93cfeea6a127ec52be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:14:33 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:33 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:33 compute-0 podman[208809]: 2025-12-05 01:14:33.899836337 +0000 UTC m=+0.081201532 container create 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:33 compute-0 podman[208809]: 2025-12-05 01:14:33.864061521 +0000 UTC m=+0.045426756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de3b83122a6d15bf4d6f88ba2d42cc8e5c5fd5c90d968c3d58c311055a27e01/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:34 compute-0 podman[208809]: 2025-12-05 01:14:34.036333357 +0000 UTC m=+0.217698822 container init 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:14:34 compute-0 podman[208809]: 2025-12-05 01:14:34.050572833 +0000 UTC m=+0.231938018 container start 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:14:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  5 01:14:34 compute-0 bash[208809]: 6e6a7cedb28bff2eaefd9d1f0a74b137d90c05ecaa1d91aa67275a4d70d5d74a
Dec  5 01:14:34 compute-0 systemd[1]: Started Ceph osd.2 for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:14:34 compute-0 ceph-osd[208828]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:14:34 compute-0 ceph-osd[208828]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Dec  5 01:14:34 compute-0 ceph-osd[208828]: pidfile_write: ignore empty --pid-file
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c43652d800 /var/lib/ceph/osd/ceph-2/block) close
Dec  5 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4356eb800 /var/lib/ceph/osd/ceph-2/block) close
Dec  5 01:14:34 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:34 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:34 compute-0 ceph-osd[208828]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec  5 01:14:34 compute-0 ceph-osd[208828]: load: jerasure load: lrc 
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.222 iops: 5688.725 elapsed_sec: 0.527
Dec  5 01:14:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [WRN] : OSD bench result of 5688.724897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 0 waiting for initial osdmap
Dec  5 01:14:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:34.733+0000 7f1d23a4b640 -1 osd.1 0 waiting for initial osdmap
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 check_osdmap_features require_osd_release unknown -> reef
Dec  5 01:14:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-1[207791]: 2025-12-05T01:14:34.766+0000 7f1d1e85c640 -1 osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 set_numa_affinity not setting numa affinity
Dec  5 01:14:34 compute-0 ceph-osd[207795]: osd.1 11 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:34 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) close
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.027932213 +0000 UTC m=+0.074800643 container create fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:34.993207056 +0000 UTC m=+0.040075576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:35 compute-0 systemd[1]: Started libpod-conmon-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope.
Dec  5 01:14:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:35 compute-0 ceph-mon[192914]: OSD bench result of 5688.724897 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.160457523 +0000 UTC m=+0.207325973 container init fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.178563747 +0000 UTC m=+0.225432167 container start fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.18333604 +0000 UTC m=+0.230204460 container attach fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:35 compute-0 sweet_matsumoto[209006]: 167 167
Dec  5 01:14:35 compute-0 systemd[1]: libpod-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope: Deactivated successfully.
Dec  5 01:14:35 compute-0 conmon[209006]: conmon fe52e5c8471f5419d04d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope/container/memory.events
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.191555418 +0000 UTC m=+0.238423858 container died fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b4c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs mount
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs mount shared_bdev_used = 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Git sha 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB SUMMARY
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB Session ID:  XCPSDI8P01OE9YLX3G6I
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                     Options.env: 0x55c43657fd50
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                Options.info_log: 0x55c435776800
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.write_buffer_manager: 0x55c436668460
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compression algorithms supported:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kZSTD supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kXpressCompression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kBZip2Compression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kLZ4Compression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kZlibCompression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: 	kSnappyCompression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c8ebee885af0519e04e0880b76a481772193f04499d02b55cb4df13e5e7e79c-merged.mount: Deactivated successfully.
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575e430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575e430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776e60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575e430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  5 01:14:35 compute-0 podman[208986]: 2025-12-05 01:14:35.255856439 +0000 UTC m=+0.302724859 container remove fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:14:35 compute-0 systemd[1]: libpod-conmon-fe52e5c8471f5419d04d1120caefa14e497ba48dd3030121202f4faa8be63fb9.scope: Deactivated successfully.
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275284132, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275284609, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: freelist init
Dec  5 01:14:35 compute-0 ceph-osd[208828]: freelist _read_cfg
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs umount
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) close
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bdev(0x55c4358b5400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs mount
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluefs mount shared_bdev_used = 4718592
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: RocksDB version: 7.9.2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Git sha 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compile date 2025-05-06 23:30:25
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB SUMMARY
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB Session ID:  XCPSDI8P01OE9YLX3G6J
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: CURRENT file:  CURRENT
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: IDENTITY file:  IDENTITY
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.error_if_exists: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.create_if_missing: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.paranoid_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                     Options.env: 0x55c43671c460
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                Options.info_log: 0x55c4357771c0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_file_opening_threads: 16
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.statistics: (nil)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.use_fsync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.max_log_file_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.allow_fallocate: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_reads: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.create_missing_column_families: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.db_log_dir: 
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                                 Options.wal_dir: db.wal
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.advise_random_on_open: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.write_buffer_manager: 0x55c4366686e0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                            Options.rate_limiter: (nil)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.unordered_write: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.row_cache: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                              Options.wal_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_ingest_behind: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.two_write_queues: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.manual_wal_flush: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.wal_compression: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.atomic_flush: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.log_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.allow_data_in_errors: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.db_host_id: __hostname__
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_jobs: 4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_background_compactions: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_subcompactions: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.max_open_files: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.max_background_flushes: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Compression algorithms supported:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kZSTD supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kXpressCompression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kBZip2Compression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kLZ4Compression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kZlibCompression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  5 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.494653377 +0000 UTC m=+0.090477750 container create 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: #011kSnappyCompression supported: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55c43575edd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c43575edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c4357769c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c43575edd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c43575e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c43575e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:           Options.merge_operator: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.compaction_filter_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.sst_partitioner_factory: None
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55c435776f60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55c43575e430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.write_buffer_size: 16777216
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.max_write_buffer_number: 64
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.compression: LZ4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.num_levels: 7
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.level: 32767
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.compression_opts.strategy: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                  Options.compression_opts.enabled: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.arena_block_size: 1048576
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.disable_auto_compactions: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.inplace_update_support: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.bloom_locality: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                    Options.max_successive_merges: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.paranoid_file_checks: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.force_consistency_checks: 1
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.report_bg_io_stats: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                               Options.ttl: 2592000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                       Options.enable_blob_files: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                           Options.min_blob_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                          Options.blob_file_size: 268435456
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb:                Options.blob_file_starting_level: 0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  5 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.453035648 +0000 UTC m=+0.048860101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:35 compute-0 systemd[1]: Started libpod-conmon-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope.
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275556537, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275563837, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275569875, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275576071, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897275, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "71b297f4-3cae-4ec5-b4fe-6311e6b0d4ce", "db_session_id": "XCPSDI8P01OE9YLX3G6J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897275581702, "job": 1, "event": "recovery_finished"}
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  5 01:14:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.619329468 +0000 UTC m=+0.215153861 container init 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55c43675fc00
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: DB pointer 0x55c435799a00
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec  5 01:14:35 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 460.80 MB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000301335%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Dec  5 01:14:35 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/2180313635; not ready for session (expect reconnect)
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:35 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  5 01:14:35 compute-0 ceph-osd[208828]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  5 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.633200764 +0000 UTC m=+0.229025127 container start 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  5 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load lua
Dec  5 01:14:35 compute-0 podman[209224]: 2025-12-05 01:14:35.63773302 +0000 UTC m=+0.233557383 container attach 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load sdk
Dec  5 01:14:35 compute-0 ceph-osd[208828]: _get_class not permitted to load test_remote_reads
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 load_pgs
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 load_pgs opened 0 pgs
Dec  5 01:14:35 compute-0 ceph-osd[208828]: osd.2 0 log_to_monitors true
Dec  5 01:14:35 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:35.639+0000 7f35ea256740 -1 osd.2 0 log_to_monitors true
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 e12: 3 total, 2 up, 3 in
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635] boot
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 2 up, 3 in
Dec  5 01:14:35 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  5 01:14:35 compute-0 ceph-osd[207795]: osd.1 12 state: booting -> active
Dec  5 01:14:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 12 pg[1.0( empty local-lis/les=0/0 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  5 01:14:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec  5 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  5 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  5 01:14:36 compute-0 ceph-mon[192914]: osd.1 [v2:192.168.122.100:6806/2180313635,v1:192.168.122.100:6807/2180313635] boot
Dec  5 01:14:36 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  5 01:14:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  5 01:14:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]: {
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_id": 0,
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "type": "bluestore"
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    },
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_id": 1,
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "type": "bluestore"
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    },
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_id": 2,
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:        "type": "bluestore"
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]:    }
Dec  5 01:14:36 compute-0 wonderful_tharp[209421]: }
Dec  5 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  5 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Dec  5 01:14:36 compute-0 systemd[1]: libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Deactivated successfully.
Dec  5 01:14:36 compute-0 conmon[209421]: conmon 5c3dca1ac9784c5ee558 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope/container/memory.events
Dec  5 01:14:36 compute-0 systemd[1]: libpod-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Consumed 1.108s CPU time.
Dec  5 01:14:36 compute-0 podman[209224]: 2025-12-05 01:14:36.746222131 +0000 UTC m=+1.342046534 container died 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 done with init, starting boot process
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 start_boot
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  5 01:14:36 compute-0 ceph-osd[208828]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec  5 01:14:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Dec  5 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:36 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:36 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec  5 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:36 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=12/13 n=0 ec=10/10 lis/c=0/0 les/c/f=0/0/0 sis=12) [1] r=0 lpr=12 pi=[10,12)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d8ab5b1a2efa965c0ad3ff366b2d724ddcd5a7087498746d2d2f3afff00739-merged.mount: Deactivated successfully.
Dec  5 01:14:36 compute-0 podman[209224]: 2025-12-05 01:14:36.921236193 +0000 UTC m=+1.517060556 container remove 5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_tharp, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:14:36 compute-0 systemd[1]: libpod-conmon-5c3dca1ac9784c5ee558bd4f8f448242ddc395c70d1ce88b790c391f38c14a6b.scope: Deactivated successfully.
Dec  5 01:14:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:36 compute-0 ceph-mgr[193209]: [devicehealth INFO root] creating main.db for devicehealth
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:37 compute-0 ceph-mon[192914]: from='osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  5 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 01:14:37 compute-0 ceph-mgr[193209]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:37 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:37 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec  5 01:14:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:37 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  5 01:14:38 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  5 01:14:38 compute-0 ceph-mon[192914]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  5 01:14:38 compute-0 podman[209728]: 2025-12-05 01:14:38.38270821 +0000 UTC m=+0.134097224 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:14:38 compute-0 podman[209728]: 2025-12-05 01:14:38.504250974 +0000 UTC m=+0.255639988 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:38 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec  5 01:14:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:38 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.afshmv(active, since 82s)
Dec  5 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:39 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec  5 01:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:39 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec  5 01:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:40 compute-0 ceph-mgr[193209]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3318741722; not ready for session (expect reconnect)
Dec  5 01:14:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:40 compute-0 ceph-mgr[193209]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.678 iops: 4781.521 elapsed_sec: 0.627
Dec  5 01:14:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [WRN] : OSD bench result of 4781.521020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 0 waiting for initial osdmap
Dec  5 01:14:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:40.918+0000 7f35e69ed640 -1 osd.2 0 waiting for initial osdmap
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 check_osdmap_features require_osd_release unknown -> reef
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:40 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-osd-2[208824]: 2025-12-05T01:14:40.947+0000 7f35e17fe640 -1 osd.2 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 set_numa_affinity not setting numa affinity
Dec  5 01:14:40 compute-0 ceph-osd[208828]: osd.2 14 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.37838767 +0000 UTC m=+0.063203051 container create 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:14:41 compute-0 systemd[1]: Started libpod-conmon-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope.
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.354754872 +0000 UTC m=+0.039570263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  5 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 e15: 3 total, 3 up, 3 in
Dec  5 01:14:41 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722] boot
Dec  5 01:14:41 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 3 up, 3 in
Dec  5 01:14:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Dec  5 01:14:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  5 01:14:41 compute-0 ceph-osd[208828]: osd.2 15 state: booting -> active
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.508825331 +0000 UTC m=+0.193640742 container init 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.523050367 +0000 UTC m=+0.207865738 container start 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.529849867 +0000 UTC m=+0.214665438 container attach 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:41 compute-0 admiring_blackburn[210135]: 167 167
Dec  5 01:14:41 compute-0 systemd[1]: libpod-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope: Deactivated successfully.
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.534496886 +0000 UTC m=+0.219312277 container died 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-59a4342c6a2c6116266627aa057e800f20100bbbe198174045079e9ca4bcb7ee-merged.mount: Deactivated successfully.
Dec  5 01:14:41 compute-0 podman[210119]: 2025-12-05 01:14:41.591505963 +0000 UTC m=+0.276321334 container remove 185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_blackburn, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:41 compute-0 systemd[1]: libpod-conmon-185a75e9ccb30fdaf16f2a86ce6d64860d3762bbc18ed04abd26420a20ccf742.scope: Deactivated successfully.
Dec  5 01:14:41 compute-0 podman[210158]: 2025-12-05 01:14:41.862030284 +0000 UTC m=+0.096487057 container create 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:14:41 compute-0 podman[210158]: 2025-12-05 01:14:41.812098104 +0000 UTC m=+0.046554937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:41 compute-0 systemd[1]: Started libpod-conmon-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope.
Dec  5 01:14:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.028583791 +0000 UTC m=+0.263040594 container init 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.040102372 +0000 UTC m=+0.274559125 container start 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:42 compute-0 podman[210158]: 2025-12-05 01:14:42.045100541 +0000 UTC m=+0.279557374 container attach 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  5 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  5 01:14:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 e16: 3 total, 3 up, 3 in
Dec  5 01:14:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 3 up, 3 in
Dec  5 01:14:42 compute-0 ceph-mon[192914]: OSD bench result of 4781.521020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  5 01:14:42 compute-0 ceph-mon[192914]: osd.2 [v2:192.168.122.100:6810/3318741722,v1:192.168.122.100:6811/3318741722] boot
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.542 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.543 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.543 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.544 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.546 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.566 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:14:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  5 01:14:44 compute-0 sad_chaum[210175]: [
Dec  5 01:14:44 compute-0 sad_chaum[210175]:    {
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "available": false,
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "ceph_device": false,
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "lsm_data": {},
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "lvs": [],
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "path": "/dev/sr0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "rejected_reasons": [
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "Insufficient space (<5GB)",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "Has a FileSystem"
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        ],
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        "sys_api": {
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "actuators": null,
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "device_nodes": "sr0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "devname": "sr0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "human_readable_size": "482.00 KB",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "id_bus": "ata",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "model": "QEMU DVD-ROM",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "nr_requests": "2",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "parent": "/dev/sr0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "partitions": {},
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "path": "/dev/sr0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "removable": "1",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "rev": "2.5+",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "ro": "0",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "rotational": "1",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "sas_address": "",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "sas_device_handle": "",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "scheduler_mode": "mq-deadline",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "sectors": 0,
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "sectorsize": "2048",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "size": 493568.0,
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "support_discard": "2048",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "type": "disk",
Dec  5 01:14:44 compute-0 sad_chaum[210175]:            "vendor": "QEMU"
Dec  5 01:14:44 compute-0 sad_chaum[210175]:        }
Dec  5 01:14:44 compute-0 sad_chaum[210175]:    }
Dec  5 01:14:44 compute-0 sad_chaum[210175]: ]
Dec  5 01:14:44 compute-0 systemd[1]: libpod-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Deactivated successfully.
Dec  5 01:14:44 compute-0 systemd[1]: libpod-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Consumed 2.205s CPU time.
Dec  5 01:14:44 compute-0 podman[210158]: 2025-12-05 01:14:44.151324469 +0000 UTC m=+2.385781232 container died 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:14:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b28d09b62b182d77cbc6628f1fda0ed5f2ff09cbf956f212da6f40dbe4d3b158-merged.mount: Deactivated successfully.
Dec  5 01:14:44 compute-0 podman[210158]: 2025-12-05 01:14:44.23002979 +0000 UTC m=+2.464486553 container remove 3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:44 compute-0 systemd[1]: libpod-conmon-3b960fb67c3fcb7e7df228d6ac5266042f012de4c1b849fb70d26f735817e043.scope: Deactivated successfully.
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43690k
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43690k
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f90cb7b8-f51b-435b-bcdd-6502b5985af0 does not exist
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8e4bfb61-48c9-43d9-b97f-273bd6a2475b does not exist
Dec  5 01:14:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 22e20da0-917a-41c9-bd88-e06517a486b6 does not exist
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.139757626 +0000 UTC m=+0.075203815 container create 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.111495529 +0000 UTC m=+0.046941798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:45 compute-0 systemd[1]: Started libpod-conmon-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope.
Dec  5 01:14:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.262763811 +0000 UTC m=+0.198210030 container init 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.274819876 +0000 UTC m=+0.210266085 container start 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.280724961 +0000 UTC m=+0.216171190 container attach 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:14:45 compute-0 youthful_herschel[212390]: 167 167
Dec  5 01:14:45 compute-0 systemd[1]: libpod-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope: Deactivated successfully.
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.285880964 +0000 UTC m=+0.221327143 container died 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  5 01:14:45 compute-0 ceph-mon[192914]: Adjusting osd_memory_target on compute-0 to 43690k
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:14:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3b7578590ffdf54a468406e97c4d49c78e05c54640adc4fd3ff5bb853041a1-merged.mount: Deactivated successfully.
Dec  5 01:14:45 compute-0 podman[212374]: 2025-12-05 01:14:45.334529049 +0000 UTC m=+0.269975228 container remove 814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  5 01:14:45 compute-0 systemd[1]: libpod-conmon-814386589e1c33bd33f38da008c1cfed4c45473dc797d293ff529ec3029fce28.scope: Deactivated successfully.
Dec  5 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.524034515 +0000 UTC m=+0.054332854 container create 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.502240428 +0000 UTC m=+0.032538747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:45 compute-0 systemd[1]: Started libpod-conmon-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope.
Dec  5 01:14:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.670942505 +0000 UTC m=+0.201240824 container init 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.68551441 +0000 UTC m=+0.215812749 container start 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:14:45 compute-0 podman[212413]: 2025-12-05 01:14:45.69376918 +0000 UTC m=+0.224067479 container attach 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:14:46 compute-0 ceph-mon[192914]: Unable to set osd_memory_target on compute-0 to 44739242: error parsing value: Value '44739242' is below minimum 939524096
Dec  5 01:14:46 compute-0 trusting_hugle[212428]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:14:46 compute-0 trusting_hugle[212428]: --> relative data size: 1.0
Dec  5 01:14:46 compute-0 trusting_hugle[212428]: --> All data devices are unavailable
Dec  5 01:14:46 compute-0 systemd[1]: libpod-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Deactivated successfully.
Dec  5 01:14:46 compute-0 podman[212413]: 2025-12-05 01:14:46.800749709 +0000 UTC m=+1.331048008 container died 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:14:46 compute-0 systemd[1]: libpod-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Consumed 1.056s CPU time.
Dec  5 01:14:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef64c72de7de042c853a5d11f68d82a128b285a0018e8f3e0e1d291c9ac80de-merged.mount: Deactivated successfully.
Dec  5 01:14:46 compute-0 podman[212413]: 2025-12-05 01:14:46.86474056 +0000 UTC m=+1.395038859 container remove 0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:46 compute-0 systemd[1]: libpod-conmon-0ea5afd2f550f613acc2630751462aca5b8d7b5db9f473ee837d464ce1443051.scope: Deactivated successfully.
Dec  5 01:14:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.728268161 +0000 UTC m=+0.086656673 container create a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:14:47 compute-0 systemd[1]: Started libpod-conmon-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope.
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.695362205 +0000 UTC m=+0.053750757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:47 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:14:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.843831778 +0000 UTC m=+0.202220290 container init a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.855780281 +0000 UTC m=+0.214168753 container start a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.859853264 +0000 UTC m=+0.218241776 container attach a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:14:47 compute-0 tender_goldwasser[212624]: 167 167
Dec  5 01:14:47 compute-0 systemd[1]: libpod-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope: Deactivated successfully.
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.863026993 +0000 UTC m=+0.221415465 container died a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba757d730ff10c334650be52e677c2bd835f95f1588311c19b6aed4848505fd-merged.mount: Deactivated successfully.
Dec  5 01:14:47 compute-0 podman[212608]: 2025-12-05 01:14:47.942180876 +0000 UTC m=+0.300569349 container remove a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:14:47 compute-0 systemd[1]: libpod-conmon-a0b81d9d302169395452672585dd2849a710af40eda82215205a9df4326c0470.scope: Deactivated successfully.
Dec  5 01:14:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.198710427 +0000 UTC m=+0.089874473 container create 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.170223434 +0000 UTC m=+0.061387510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:48 compute-0 systemd[1]: Started libpod-conmon-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope.
Dec  5 01:14:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.380960361 +0000 UTC m=+0.272124427 container init 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.413055205 +0000 UTC m=+0.304219241 container start 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:48 compute-0 podman[212648]: 2025-12-05 01:14:48.419626968 +0000 UTC m=+0.310791094 container attach 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]: {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    "0": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "devices": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "/dev/loop3"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            ],
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_name": "ceph_lv0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_size": "21470642176",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "name": "ceph_lv0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "tags": {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.crush_device_class": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.encrypted": "0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_id": "0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.vdo": "0"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            },
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "vg_name": "ceph_vg0"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        }
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    ],
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    "1": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "devices": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "/dev/loop4"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            ],
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_name": "ceph_lv1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_size": "21470642176",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "name": "ceph_lv1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "tags": {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.crush_device_class": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.encrypted": "0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_id": "1",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.vdo": "0"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            },
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "vg_name": "ceph_vg1"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        }
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    ],
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    "2": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "devices": [
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "/dev/loop5"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            ],
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_name": "ceph_lv2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_size": "21470642176",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "name": "ceph_lv2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "tags": {
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.cluster_name": "ceph",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.crush_device_class": "",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.encrypted": "0",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osd_id": "2",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:                "ceph.vdo": "0"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            },
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "type": "block",
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:            "vg_name": "ceph_vg2"
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:        }
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]:    ]
Dec  5 01:14:49 compute-0 awesome_lichterman[212664]: }
Dec  5 01:14:49 compute-0 systemd[1]: libpod-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope: Deactivated successfully.
Dec  5 01:14:49 compute-0 podman[212648]: 2025-12-05 01:14:49.252102194 +0000 UTC m=+1.143266280 container died 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-005e630fd4daef456975de4e8cd48fbe25712a2c11592d8726cf4903a13a5edd-merged.mount: Deactivated successfully.
Dec  5 01:14:49 compute-0 podman[212648]: 2025-12-05 01:14:49.363398062 +0000 UTC m=+1.254562128 container remove 7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:49 compute-0 systemd[1]: libpod-conmon-7c597e6fc0226a383e44187d5d364071ea467717f2fa57cd0707aa3a4aa5c272.scope: Deactivated successfully.
Dec  5 01:14:49 compute-0 podman[212674]: 2025-12-05 01:14:49.425028468 +0000 UTC m=+0.136209233 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:14:49 compute-0 podman[212781]: 2025-12-05 01:14:49.91251152 +0000 UTC m=+0.108078860 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:14:49 compute-0 podman[212782]: 2025-12-05 01:14:49.957614546 +0000 UTC m=+0.154101052 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:14:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.452359939 +0000 UTC m=+0.093296668 container create 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.411118611 +0000 UTC m=+0.052055390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:50 compute-0 systemd[1]: Started libpod-conmon-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope.
Dec  5 01:14:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.608219319 +0000 UTC m=+0.249156048 container init 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.619615626 +0000 UTC m=+0.260552325 container start 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:50 compute-0 podman[212895]: 2025-12-05 01:14:50.625587222 +0000 UTC m=+0.266523971 container attach 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:50 compute-0 strange_ellis[212910]: 167 167
Dec  5 01:14:50 compute-0 systemd[1]: libpod-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope: Deactivated successfully.
Dec  5 01:14:50 compute-0 podman[212915]: 2025-12-05 01:14:50.717194202 +0000 UTC m=+0.050652821 container died 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe52a3a2a907b67780964303a6702aa48d084501cc1d644f56e3b6f198f4f133-merged.mount: Deactivated successfully.
Dec  5 01:14:50 compute-0 podman[212915]: 2025-12-05 01:14:50.801489359 +0000 UTC m=+0.134947918 container remove 26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:50 compute-0 systemd[1]: libpod-conmon-26030b446d39c4727a88c2a9529bd43f2a76d2b5248f66ebad792fa04e151b21.scope: Deactivated successfully.
Dec  5 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.07520544 +0000 UTC m=+0.098754471 container create ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.035537485 +0000 UTC m=+0.059086556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:51 compute-0 systemd[1]: Started libpod-conmon-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope.
Dec  5 01:14:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.211650278 +0000 UTC m=+0.235199359 container init ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.225057432 +0000 UTC m=+0.248606433 container start ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:14:51 compute-0 podman[212937]: 2025-12-05 01:14:51.230797111 +0000 UTC m=+0.254346142 container attach ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:14:51 compute-0 podman[212959]: 2025-12-05 01:14:51.711486823 +0000 UTC m=+0.107870943 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:52 compute-0 elated_jennings[212954]: {
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_id": 0,
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "type": "bluestore"
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    },
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_id": 1,
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "type": "bluestore"
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    },
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_id": 2,
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:14:52 compute-0 elated_jennings[212954]:        "type": "bluestore"
Dec  5 01:14:52 compute-0 elated_jennings[212954]:    }
Dec  5 01:14:52 compute-0 elated_jennings[212954]: }
Dec  5 01:14:52 compute-0 systemd[1]: libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Deactivated successfully.
Dec  5 01:14:52 compute-0 systemd[1]: libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Consumed 1.165s CPU time.
Dec  5 01:14:52 compute-0 conmon[212954]: conmon ebac158785ad08b61be2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope/container/memory.events
Dec  5 01:14:52 compute-0 podman[212937]: 2025-12-05 01:14:52.403644603 +0000 UTC m=+1.427193634 container died ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:14:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-73b5bc714bf8f2293280f1fe97eeeadd91f1e442e5748cbf093ff466c8048bb4-merged.mount: Deactivated successfully.
Dec  5 01:14:52 compute-0 podman[212937]: 2025-12-05 01:14:52.491463068 +0000 UTC m=+1.515012049 container remove ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_jennings, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  5 01:14:52 compute-0 systemd[1]: libpod-conmon-ebac158785ad08b61be247e6c01e813a69d196fb1d409e14df87d5eda22fb1b3.scope: Deactivated successfully.
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:52 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  5 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  5 01:14:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:52 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  5 01:14:52 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.604245837 +0000 UTC m=+0.049282523 container create aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 01:14:53 compute-0 systemd[1]: Started libpod-conmon-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope.
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.585005061 +0000 UTC m=+0.030041767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.719768113 +0000 UTC m=+0.164804819 container init aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.729878215 +0000 UTC m=+0.174914921 container start aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.734720779 +0000 UTC m=+0.179757465 container attach aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:14:53 compute-0 clever_darwin[213201]: 167 167
Dec  5 01:14:53 compute-0 systemd[1]: libpod-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope: Deactivated successfully.
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.737412344 +0000 UTC m=+0.182449030 container died aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:14:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4df6a19e4ced6c34fffb0ebc080ab24e338ba19d2f75e1043d9fa22cce95348-merged.mount: Deactivated successfully.
Dec  5 01:14:53 compute-0 podman[213186]: 2025-12-05 01:14:53.806856098 +0000 UTC m=+0.251892784 container remove aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_darwin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:53 compute-0 systemd[1]: libpod-conmon-aa6040f9ec02c34c8715edd95c08c9e5c1e8f6725088605c640211e5b7245ed6.scope: Deactivated successfully.
Dec  5 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:53 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec  5 01:14:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec  5 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Dec  5 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  5 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 01:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:53 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec  5 01:14:53 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec  5 01:14:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:54 compute-0 ceph-mon[192914]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  5 01:14:54 compute-0 ceph-mon[192914]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  5 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.afshmv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  5 01:14:54 compute-0 podman[213295]: 2025-12-05 01:14:54.352216981 +0000 UTC m=+0.112748530 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.573762369 +0000 UTC m=+0.047085272 container create 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:54 compute-0 systemd[1]: Started libpod-conmon-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope.
Dec  5 01:14:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.551825368 +0000 UTC m=+0.025148251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.670753919 +0000 UTC m=+0.144076832 container init 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.679542284 +0000 UTC m=+0.152865177 container start 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.683455513 +0000 UTC m=+0.156778416 container attach 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:14:54 compute-0 serene_lalande[213372]: 167 167
Dec  5 01:14:54 compute-0 systemd[1]: libpod-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope: Deactivated successfully.
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.692656519 +0000 UTC m=+0.165979412 container died 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:14:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-23f6851471a0ef9a74aefeea38cbf0323fe725c6dbc84110ca0ddbeeec77ab95-merged.mount: Deactivated successfully.
Dec  5 01:14:54 compute-0 podman[213356]: 2025-12-05 01:14:54.738935688 +0000 UTC m=+0.212258591 container remove 08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:14:54 compute-0 systemd[1]: libpod-conmon-08fe32d4ca85915365df0dad5e7420d8c852ac871cc9d4e6142ffa10578fad9c.scope: Deactivated successfully.
Dec  5 01:14:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:55 compute-0 ceph-mon[192914]: Reconfiguring mgr.compute-0.afshmv (unknown last config time)...
Dec  5 01:14:55 compute-0 ceph-mon[192914]: Reconfiguring daemon mgr.compute-0.afshmv on compute-0
Dec  5 01:14:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:55 compute-0 podman[213559]: 2025-12-05 01:14:55.715760911 +0000 UTC m=+0.070145514 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:14:55 compute-0 podman[213559]: 2025-12-05 01:14:55.832741078 +0000 UTC m=+0.187125691 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 25a0b490-005e-441b-a51c-b0cba394fca0 does not exist
Dec  5 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b92f9777-0a54-4544-bebf-fa911afaed42 does not exist
Dec  5 01:14:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8160cbf4-62f7-4181-ae9e-cf19cba0726b does not exist
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:14:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:14:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:14:56 compute-0 python3[213775]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:14:56 compute-0 podman[213802]: 2025-12-05 01:14:56.953608522 +0000 UTC m=+0.052135522 container create 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:57 compute-0 systemd[1]: Started libpod-conmon-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope.
Dec  5 01:14:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:56.928989677 +0000 UTC m=+0.027516707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.090691139 +0000 UTC m=+0.189218129 container init 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.11732207 +0000 UTC m=+0.215849050 container start 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.122853485 +0000 UTC m=+0.221380465 container attach 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:14:57 compute-0 podman[213840]: 2025-12-05 01:14:57.161010297 +0000 UTC m=+0.123526411 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Dec  5 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:14:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.323555822 +0000 UTC m=+0.061750800 container create 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:57 compute-0 systemd[1]: Started libpod-conmon-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope.
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.293724421 +0000 UTC m=+0.031919409 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.423047212 +0000 UTC m=+0.161242170 container init 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.434076669 +0000 UTC m=+0.172271627 container start 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.439114349 +0000 UTC m=+0.177309297 container attach 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:14:57 compute-0 zealous_leavitt[213895]: 167 167
Dec  5 01:14:57 compute-0 systemd[1]: libpod-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope: Deactivated successfully.
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.447731269 +0000 UTC m=+0.185926227 container died 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1db5914ce12bef39b78ba8067b9e73c11a95a0bb0cde50ef055d03ad3b12f8f9-merged.mount: Deactivated successfully.
Dec  5 01:14:57 compute-0 podman[213878]: 2025-12-05 01:14:57.498500762 +0000 UTC m=+0.236695710 container remove 1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:14:57 compute-0 systemd[1]: libpod-conmon-1a22f0c3f066077eb7a8a7ed296e47636b558800432d8b755179522c67fa2be8.scope: Deactivated successfully.
Dec  5 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.752505463 +0000 UTC m=+0.079966444 container create 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.717991588 +0000 UTC m=+0.045452629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:14:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  5 01:14:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439009551' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  5 01:14:57 compute-0 practical_poincare[213842]: 
Dec  5 01:14:57 compute-0 practical_poincare[213842]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":150,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":16,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":502738944,"bytes_avail":63909187584,"bytes_total":64411926528},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-05T01:14:18.046599+0000","services":{}},"progress_events":{}}
Dec  5 01:14:57 compute-0 systemd[1]: Started libpod-conmon-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope.
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.877354257 +0000 UTC m=+0.975881297 container died 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:14:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:57 compute-0 systemd[1]: libpod-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope: Deactivated successfully.
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.938609578 +0000 UTC m=+0.266070569 container init 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:14:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d038bd823964328c51e25e06913a79ae7f05a00da1e3d48eb001ed7ca4e8226c-merged.mount: Deactivated successfully.
Dec  5 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.959915909 +0000 UTC m=+0.287376860 container start 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:14:57 compute-0 podman[213937]: 2025-12-05 01:14:57.96742586 +0000 UTC m=+0.294886851 container attach 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:14:57 compute-0 podman[213802]: 2025-12-05 01:14:57.983271894 +0000 UTC m=+1.081798864 container remove 97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf (image=quay.io/ceph/ceph:v18, name=practical_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 01:14:58 compute-0 systemd[1]: libpod-conmon-97a30fd0e15d4f8ece7b0599abf03f747fbc985e6f13eabd54ee16bea5042ecf.scope: Deactivated successfully.
Dec  5 01:14:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:14:58 compute-0 python3[213996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.649422728 +0000 UTC m=+0.090983728 container create ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.610744212 +0000 UTC m=+0.052305292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:14:58 compute-0 systemd[1]: Started libpod-conmon-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope.
Dec  5 01:14:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.816340809 +0000 UTC m=+0.257901879 container init ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.833256652 +0000 UTC m=+0.274817642 container start ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:14:58 compute-0 podman[213997]: 2025-12-05 01:14:58.840377622 +0000 UTC m=+0.281938642 container attach ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:14:59 compute-0 wonderful_archimedes[213955]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:14:59 compute-0 wonderful_archimedes[213955]: --> relative data size: 1.0
Dec  5 01:14:59 compute-0 wonderful_archimedes[213955]: --> All data devices are unavailable
Dec  5 01:14:59 compute-0 systemd[1]: libpod-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Deactivated successfully.
Dec  5 01:14:59 compute-0 podman[213937]: 2025-12-05 01:14:59.258820471 +0000 UTC m=+1.586281422 container died 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:14:59 compute-0 systemd[1]: libpod-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Consumed 1.221s CPU time.
Dec  5 01:14:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f4a51d442750fa1428c1136d3cdeb9a080c1db84db95cee9207d79f3dc29a71-merged.mount: Deactivated successfully.
Dec  5 01:14:59 compute-0 podman[213937]: 2025-12-05 01:14:59.34835406 +0000 UTC m=+1.675815001 container remove 521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:14:59 compute-0 systemd[1]: libpod-conmon-521be331ee93edc723e7a7b5fe819e4cdc08c76e530fd053e8c2b69c11c67ac0.scope: Deactivated successfully.
Dec  5 01:14:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:14:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:14:59 compute-0 podman[158197]: time="2025-12-05T01:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30683 "" "Go-http-client/1.1"
Dec  5 01:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6252 "" "Go-http-client/1.1"
Dec  5 01:15:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 479 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  5 01:15:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Dec  5 01:15:00 compute-0 cranky_margulis[214016]: pool 'vms' created
Dec  5 01:15:00 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Dec  5 01:15:00 compute-0 systemd[1]: libpod-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope: Deactivated successfully.
Dec  5 01:15:00 compute-0 podman[213997]: 2025-12-05 01:15:00.265148708 +0000 UTC m=+1.706709708 container died ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-086b60429445134e1299d512fc865234e034c94d6d29507051f8cead594d5219-merged.mount: Deactivated successfully.
Dec  5 01:15:00 compute-0 podman[213997]: 2025-12-05 01:15:00.337860005 +0000 UTC m=+1.779420995 container remove ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5 (image=quay.io/ceph/ceph:v18, name=cranky_margulis, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.360506562 +0000 UTC m=+0.079019228 container create 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:00 compute-0 systemd[1]: libpod-conmon-ee927366d55720da649cf108f4b5fac7a6900ae9803783ba59847238d0cc27d5.scope: Deactivated successfully.
Dec  5 01:15:00 compute-0 systemd[1]: Started libpod-conmon-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope.
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.341932124 +0000 UTC m=+0.060444810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.465431343 +0000 UTC m=+0.183944029 container init 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.475191814 +0000 UTC m=+0.193704480 container start 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.479349486 +0000 UTC m=+0.197862252 container attach 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:00 compute-0 interesting_leavitt[214240]: 167 167
Dec  5 01:15:00 compute-0 systemd[1]: libpod-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope: Deactivated successfully.
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.487698009 +0000 UTC m=+0.206210675 container died 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Dec  5 01:15:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-69cafeafffae32b2f7de309446f8a65fde52389a7d3c5200d128262fae904f69-merged.mount: Deactivated successfully.
Dec  5 01:15:00 compute-0 podman[214214]: 2025-12-05 01:15:00.538226183 +0000 UTC m=+0.256738849 container remove 43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_leavitt, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:00 compute-0 systemd[1]: libpod-conmon-43f03b24827fbf80ef163998bcbe6cb872940ca359c8cae2837bc84d6a642184.scope: Deactivated successfully.
Dec  5 01:15:00 compute-0 python3[214280]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:00 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.794707893 +0000 UTC m=+0.071864066 container create ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.814595426 +0000 UTC m=+0.058157389 container create 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.77107263 +0000 UTC m=+0.048228783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:00 compute-0 systemd[1]: Started libpod-conmon-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope.
Dec  5 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.781348145 +0000 UTC m=+0.024910128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:00 compute-0 systemd[1]: Started libpod-conmon-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope.
Dec  5 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.940838488 +0000 UTC m=+0.217994731 container init ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.952451209 +0000 UTC m=+0.196013162 container init 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.956804195 +0000 UTC m=+0.233960348 container start ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.959469817 +0000 UTC m=+0.203031770 container start 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:15:00 compute-0 podman[214289]: 2025-12-05 01:15:00.96333335 +0000 UTC m=+0.240489503 container attach ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:00 compute-0 podman[214299]: 2025-12-05 01:15:00.974272483 +0000 UTC m=+0.217834456 container attach 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  5 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Dec  5 01:15:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Dec  5 01:15:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1204988894' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:01 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: ERROR   01:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:15:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:15:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:15:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]: {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    "0": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "devices": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "/dev/loop3"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            ],
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_name": "ceph_lv0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_size": "21470642176",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "name": "ceph_lv0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "tags": {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.crush_device_class": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.encrypted": "0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_id": "0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.vdo": "0"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            },
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "vg_name": "ceph_vg0"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        }
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    ],
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    "1": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "devices": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "/dev/loop4"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            ],
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_name": "ceph_lv1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_size": "21470642176",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "name": "ceph_lv1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "tags": {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.crush_device_class": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.encrypted": "0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_id": "1",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.vdo": "0"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            },
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "vg_name": "ceph_vg1"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        }
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    ],
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    "2": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "devices": [
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "/dev/loop5"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            ],
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_name": "ceph_lv2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_size": "21470642176",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "name": "ceph_lv2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "tags": {
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.crush_device_class": "",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.encrypted": "0",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osd_id": "2",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:                "ceph.vdo": "0"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            },
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "type": "block",
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:            "vg_name": "ceph_vg2"
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:        }
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]:    ]
Dec  5 01:15:01 compute-0 stoic_lovelace[214318]: }
Dec  5 01:15:01 compute-0 systemd[1]: libpod-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope: Deactivated successfully.
Dec  5 01:15:01 compute-0 podman[214289]: 2025-12-05 01:15:01.786539021 +0000 UTC m=+1.063695204 container died ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-09fa312c4b985690566eb6b623a4d02c9870be9e84da412825f3991a4a6418ca-merged.mount: Deactivated successfully.
Dec  5 01:15:01 compute-0 podman[214289]: 2025-12-05 01:15:01.891257046 +0000 UTC m=+1.168413209 container remove ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:01 compute-0 systemd[1]: libpod-conmon-ec7b4e35c577b2c4c4b86d33459a006ccf832d16ca8f267006ba53001a44e019.scope: Deactivated successfully.
Dec  5 01:15:01 compute-0 podman[214352]: 2025-12-05 01:15:01.951919851 +0000 UTC m=+0.116087370 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  5 01:15:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec  5 01:15:02 compute-0 exciting_mendel[214317]: pool 'volumes' created
Dec  5 01:15:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec  5 01:15:02 compute-0 systemd[1]: libpod-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope: Deactivated successfully.
Dec  5 01:15:02 compute-0 podman[214299]: 2025-12-05 01:15:02.338338842 +0000 UTC m=+1.581900805 container died 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:15:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6298e8c9dc1b52b2a83c6d140862906b8994f6920c55f010f18af1200e309369-merged.mount: Deactivated successfully.
Dec  5 01:15:02 compute-0 podman[214299]: 2025-12-05 01:15:02.411549892 +0000 UTC m=+1.655111855 container remove 0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8 (image=quay.io/ceph/ceph:v18, name=exciting_mendel, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:15:02 compute-0 systemd[1]: libpod-conmon-0f62494d4a40e27c1edde502a88166b85447b3209c15bb6752f8d277037f17d8.scope: Deactivated successfully.
Dec  5 01:15:02 compute-0 python3[214526]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:02 compute-0 podman[214550]: 2025-12-05 01:15:02.889202887 +0000 UTC m=+0.078101763 container create f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:02 compute-0 systemd[1]: Started libpod-conmon-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope.
Dec  5 01:15:02 compute-0 podman[214550]: 2025-12-05 01:15:02.857677353 +0000 UTC m=+0.046576269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:02 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 19 pg[3.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:02 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.021172732 +0000 UTC m=+0.210071628 container init f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.03898635 +0000 UTC m=+0.227885226 container start f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.042474113 +0000 UTC m=+0.093349592 container create a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:15:03 compute-0 podman[214550]: 2025-12-05 01:15:03.047664942 +0000 UTC m=+0.236563828 container attach f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:03 compute-0 systemd[1]: Started libpod-conmon-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope.
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:02.999948414 +0000 UTC m=+0.050823943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.138246798 +0000 UTC m=+0.189122337 container init a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.149205432 +0000 UTC m=+0.200080921 container start a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.155533372 +0000 UTC m=+0.206408901 container attach a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:03 compute-0 upbeat_blackwell[214599]: 167 167
Dec  5 01:15:03 compute-0 systemd[1]: libpod-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope: Deactivated successfully.
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.168975902 +0000 UTC m=+0.219851421 container died a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:15:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ccb8ce5e0d4a0b1ba42974a0786132ff1d0ad8a1463ae9af456a734714d9852-merged.mount: Deactivated successfully.
Dec  5 01:15:03 compute-0 podman[214579]: 2025-12-05 01:15:03.256768893 +0000 UTC m=+0.307644382 container remove a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:15:03 compute-0 ceph-mon[192914]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/4206810067' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  5 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec  5 01:15:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec  5 01:15:03 compute-0 systemd[1]: libpod-conmon-a0d068d9dadc36ebe93e16b56655f13514b20bf78c0615c47780ed7fee518e88.scope: Deactivated successfully.
Dec  5 01:15:03 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.514213329 +0000 UTC m=+0.067529789 container create 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.488743637 +0000 UTC m=+0.042060137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:03 compute-0 systemd[1]: Started libpod-conmon-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope.
Dec  5 01:15:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:15:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.687130511 +0000 UTC m=+0.240447041 container init 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.69528776 +0000 UTC m=+0.248604220 container start 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:03 compute-0 podman[214642]: 2025-12-05 01:15:03.700912271 +0000 UTC m=+0.254228721 container attach 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:15:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v65: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 79 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  5 01:15:04 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec  5 01:15:04 compute-0 clever_jones[214581]: pool 'backups' created
Dec  5 01:15:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec  5 01:15:04 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:04 compute-0 systemd[1]: libpod-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope: Deactivated successfully.
Dec  5 01:15:04 compute-0 podman[214550]: 2025-12-05 01:15:04.376421585 +0000 UTC m=+1.565320491 container died f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e20b1575d741839d61402c17f1e4b53f2c6c780e9f282d50f350e1d67ee95c8-merged.mount: Deactivated successfully.
Dec  5 01:15:04 compute-0 podman[214550]: 2025-12-05 01:15:04.476559598 +0000 UTC m=+1.665458474 container remove f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a (image=quay.io/ceph/ceph:v18, name=clever_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:15:04 compute-0 systemd[1]: libpod-conmon-f3a6a726a5842eda42feecd8be7f645541634f1f47143fbc4df0418ecbe1120a.scope: Deactivated successfully.
Dec  5 01:15:04 compute-0 python3[214720]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:04 compute-0 awesome_yalow[214659]: {
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_id": 0,
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "type": "bluestore"
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    },
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_id": 1,
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "type": "bluestore"
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    },
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_id": 2,
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:        "type": "bluestore"
Dec  5 01:15:04 compute-0 awesome_yalow[214659]:    }
Dec  5 01:15:04 compute-0 awesome_yalow[214659]: }
Dec  5 01:15:04 compute-0 systemd[1]: libpod-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Deactivated successfully.
Dec  5 01:15:04 compute-0 systemd[1]: libpod-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Consumed 1.202s CPU time.
Dec  5 01:15:04 compute-0 podman[214642]: 2025-12-05 01:15:04.906740671 +0000 UTC m=+1.460057141 container died 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-e656e2d67edda8fe3c80b496dd6eef5885afb055061af41791eb858ba21bffe9-merged.mount: Deactivated successfully.
Dec  5 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:04.999138146 +0000 UTC m=+0.105814095 container create 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:05 compute-0 podman[214642]: 2025-12-05 01:15:05.007817979 +0000 UTC m=+1.561134429 container remove 48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:15:05 compute-0 systemd[1]: libpod-conmon-48042c4961f2e6dbab0b607a27040235ccfeb2df730637833070882f1dc5df6d.scope: Deactivated successfully.
Dec  5 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:05 compute-0 systemd[1]: Started libpod-conmon-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope.
Dec  5 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:04.977470636 +0000 UTC m=+0.084146605 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:05 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.136099305 +0000 UTC m=+0.242775274 container init 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.153523192 +0000 UTC m=+0.260199141 container start 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:05 compute-0 podman[214731]: 2025-12-05 01:15:05.161466904 +0000 UTC m=+0.268142873 container attach 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  5 01:15:05 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/58862313' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec  5 01:15:05 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec  5 01:15:05 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [0] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:15:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v68: 4 pgs: 1 creating+peering, 1 unknown, 2 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  5 01:15:06 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec  5 01:15:06 compute-0 compassionate_brattain[214762]: pool 'images' created
Dec  5 01:15:06 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec  5 01:15:06 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:06 compute-0 systemd[1]: libpod-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope: Deactivated successfully.
Dec  5 01:15:06 compute-0 podman[214731]: 2025-12-05 01:15:06.439821227 +0000 UTC m=+1.546497246 container died 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-27335335aa2c08067bbc799fc4db4d271d8e8598a0a1c5ae0fc69b3899bf5edc-merged.mount: Deactivated successfully.
Dec  5 01:15:06 compute-0 podman[214731]: 2025-12-05 01:15:06.527058174 +0000 UTC m=+1.633734163 container remove 2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf (image=quay.io/ceph/ceph:v18, name=compassionate_brattain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:15:06 compute-0 systemd[1]: libpod-conmon-2f92165b94d775012641be80c61201f2642ebad10ba52ad00c71b8387ed70faf.scope: Deactivated successfully.
Dec  5 01:15:06 compute-0 python3[214874]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.095780188 +0000 UTC m=+0.111147688 container create 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.047130815 +0000 UTC m=+0.062498415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:07 compute-0 systemd[1]: Started libpod-conmon-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope.
Dec  5 01:15:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.23773113 +0000 UTC m=+0.253098710 container init 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.255978509 +0000 UTC m=+0.271346019 container start 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:15:07 compute-0 podman[214875]: 2025-12-05 01:15:07.260839549 +0000 UTC m=+0.276207149 container attach 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  5 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec  5 01:15:07 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec  5 01:15:07 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/950505885' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:07 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:15:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 2 creating+peering, 3 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  5 01:15:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec  5 01:15:08 compute-0 modest_meninsky[214890]: pool 'cephfs.cephfs.meta' created
Dec  5 01:15:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec  5 01:15:08 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:08 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:08 compute-0 systemd[1]: libpod-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope: Deactivated successfully.
Dec  5 01:15:08 compute-0 podman[214875]: 2025-12-05 01:15:08.456136558 +0000 UTC m=+1.471504108 container died 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1108143c62ab745d34a00ad97739a4c5d42a707f551b7135f04c975258a3d6a-merged.mount: Deactivated successfully.
Dec  5 01:15:08 compute-0 podman[214875]: 2025-12-05 01:15:08.552943671 +0000 UTC m=+1.568311191 container remove 74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61 (image=quay.io/ceph/ceph:v18, name=modest_meninsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:08 compute-0 systemd[1]: libpod-conmon-74b35977e4fb45d2c604cf02897c97a3afead41d2b60c1dd1c9ae21c6d077f61.scope: Deactivated successfully.
Dec  5 01:15:08 compute-0 python3[214953]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.093189502 +0000 UTC m=+0.110723876 container create aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.052755169 +0000 UTC m=+0.070289613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:09 compute-0 systemd[1]: Started libpod-conmon-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope.
Dec  5 01:15:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.234737304 +0000 UTC m=+0.252271678 container init aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.247084485 +0000 UTC m=+0.264618839 container start aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:15:09 compute-0 podman[214954]: 2025-12-05 01:15:09.252426648 +0000 UTC m=+0.269961002 container attach aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  5 01:15:09 compute-0 ceph-mon[192914]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:09 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/302135047' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec  5 01:15:09 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec  5 01:15:09 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 26 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [0] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Dec  5 01:15:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v74: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  5 01:15:10 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  5 01:15:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec  5 01:15:10 compute-0 sharp_galileo[214969]: pool 'cephfs.cephfs.data' created
Dec  5 01:15:10 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec  5 01:15:10 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 27 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:10 compute-0 systemd[1]: libpod-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope: Deactivated successfully.
Dec  5 01:15:10 compute-0 podman[214954]: 2025-12-05 01:15:10.526687661 +0000 UTC m=+1.544222035 container died aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6e3affd19c76c44e7d3f4e732c4a0fc28bc70c1d28ad24952be432c3b93431e-merged.mount: Deactivated successfully.
Dec  5 01:15:10 compute-0 podman[214954]: 2025-12-05 01:15:10.620201726 +0000 UTC m=+1.637736080 container remove aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476 (image=quay.io/ceph/ceph:v18, name=sharp_galileo, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:15:10 compute-0 systemd[1]: libpod-conmon-aedc1dec37176fcb1f8140f790e8a851ef2540117a1b141d84e0541d6a4a9476.scope: Deactivated successfully.
Dec  5 01:15:11 compute-0 python3[215033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.179388435 +0000 UTC m=+0.086440997 container create 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:11 compute-0 systemd[1]: Started libpod-conmon-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope.
Dec  5 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.147784108 +0000 UTC m=+0.054836670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.326628249 +0000 UTC m=+0.233680821 container init 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.337572122 +0000 UTC m=+0.244624714 container start 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:15:11 compute-0 podman[215034]: 2025-12-05 01:15:11.343022898 +0000 UTC m=+0.250075470 container attach 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  5 01:15:11 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3740689644' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  5 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec  5 01:15:11 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec  5 01:15:11 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 28 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [1] r=0 lpr=27 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Dec  5 01:15:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  5 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  5 01:15:12 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  5 01:15:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  5 01:15:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec  5 01:15:12 compute-0 kind_chaum[215049]: enabled application 'rbd' on pool 'vms'
Dec  5 01:15:12 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec  5 01:15:12 compute-0 systemd[1]: libpod-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope: Deactivated successfully.
Dec  5 01:15:12 compute-0 podman[215034]: 2025-12-05 01:15:12.586550928 +0000 UTC m=+1.493603530 container died 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f241374f297f7ebfe64ecd24c971bd720d274e94c2d679635f594141956ace94-merged.mount: Deactivated successfully.
Dec  5 01:15:12 compute-0 podman[215034]: 2025-12-05 01:15:12.668560635 +0000 UTC m=+1.575613227 container remove 5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a (image=quay.io/ceph/ceph:v18, name=kind_chaum, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:12 compute-0 systemd[1]: libpod-conmon-5555fb7280d15b72b99f82ac9aa84b72bdf89c66c64282f8625dd34b9c11c63a.scope: Deactivated successfully.
Dec  5 01:15:13 compute-0 python3[215109]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.219642406 +0000 UTC m=+0.097290477 container create cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.183160709 +0000 UTC m=+0.060808850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:13 compute-0 systemd[1]: Started libpod-conmon-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope.
Dec  5 01:15:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.408223157 +0000 UTC m=+0.285871208 container init cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.425076679 +0000 UTC m=+0.302724760 container start cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:15:13 compute-0 podman[215110]: 2025-12-05 01:15:13.432845687 +0000 UTC m=+0.310493818 container attach cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:15:13 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2738035894' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  5 01:15:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Dec  5 01:15:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  5 01:15:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  5 01:15:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:14 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  5 01:15:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  5 01:15:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec  5 01:15:14 compute-0 magical_murdock[215125]: enabled application 'rbd' on pool 'volumes'
Dec  5 01:15:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec  5 01:15:14 compute-0 systemd[1]: libpod-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope: Deactivated successfully.
Dec  5 01:15:14 compute-0 podman[215110]: 2025-12-05 01:15:14.588125343 +0000 UTC m=+1.465773384 container died cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-96590ee5135165e1403de94eb34100db7cf2f971ad916585d6a7a98a7c6d55fd-merged.mount: Deactivated successfully.
Dec  5 01:15:14 compute-0 podman[215110]: 2025-12-05 01:15:14.648968803 +0000 UTC m=+1.526616844 container remove cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e (image=quay.io/ceph/ceph:v18, name=magical_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:15:14 compute-0 systemd[1]: libpod-conmon-cfd86456f67edc3fe2d90bde1a783b523b33c5283f4b2ffbf4ddf3bef4182b7e.scope: Deactivated successfully.
Dec  5 01:15:15 compute-0 python3[215188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.178717483 +0000 UTC m=+0.083470506 container create 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.145502984 +0000 UTC m=+0.050256087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:15 compute-0 systemd[1]: Started libpod-conmon-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope.
Dec  5 01:15:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.323538023 +0000 UTC m=+0.228291136 container init 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.339636454 +0000 UTC m=+0.244389517 container start 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:15:15 compute-0 podman[215189]: 2025-12-05 01:15:15.347231517 +0000 UTC m=+0.251984570 container attach 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:15:15 compute-0 ceph-mon[192914]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:15 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3659675886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  5 01:15:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Dec  5 01:15:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:15:16
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  5 01:15:16 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  5 01:15:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  5 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  5 01:15:16 compute-0 distracted_matsumoto[215204]: enabled application 'rbd' on pool 'backups'
Dec  5 01:15:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  5 01:15:16 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  5 01:15:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:16 compute-0 systemd[1]: libpod-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope: Deactivated successfully.
Dec  5 01:15:16 compute-0 podman[215189]: 2025-12-05 01:15:16.63526964 +0000 UTC m=+1.540022703 container died 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a500b52efa8c432a23ba815a1a9f7f5c93367d2ea0ee07984993bd40b8d427b-merged.mount: Deactivated successfully.
Dec  5 01:15:16 compute-0 podman[215189]: 2025-12-05 01:15:16.742765088 +0000 UTC m=+1.647518111 container remove 99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6 (image=quay.io/ceph/ceph:v18, name=distracted_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:16 compute-0 systemd[1]: libpod-conmon-99d5b2ef557d9eb826809da38cf63c1f8c936867f01acfdc1937a3e616ff6ae6.scope: Deactivated successfully.
Dec  5 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:17 compute-0 python3[215264]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.182321712 +0000 UTC m=+0.091817780 container create bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.142160697 +0000 UTC m=+0.051656835 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:17 compute-0 systemd[1]: Started libpod-conmon-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope.
Dec  5 01:15:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.362309714 +0000 UTC m=+0.271805832 container init bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.381018805 +0000 UTC m=+0.290514863 container start bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:17 compute-0 podman[215265]: 2025-12-05 01:15:17.390999872 +0000 UTC m=+0.300495980 container attach bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  5 01:15:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  5 01:15:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  5 01:15:17 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  5 01:15:17 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/895646168' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  5 01:15:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v84: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  5 01:15:18 compute-0 elated_cori[215281]: enabled application 'rbd' on pool 'images'
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  5 01:15:18 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.372560501s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active pruub 57.631557465s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:18 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 33 pg[2.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33 pruub=14.372560501s) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown pruub 57.631557465s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:18 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  5 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:18 compute-0 podman[215265]: 2025-12-05 01:15:18.928264031 +0000 UTC m=+1.837760099 container died bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:18 compute-0 systemd[1]: libpod-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope: Deactivated successfully.
Dec  5 01:15:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b2fe447dc443a62b4f74c4a7d187be0037d67dc4d57f3764e33c35f927c78a-merged.mount: Deactivated successfully.
Dec  5 01:15:19 compute-0 podman[215265]: 2025-12-05 01:15:19.039582943 +0000 UTC m=+1.949078991 container remove bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a (image=quay.io/ceph/ceph:v18, name=elated_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:19 compute-0 systemd[1]: libpod-conmon-bd74be3a0fed2d9d10e963d6e4fc1b07d1dd3a7894726445adc5cac138a1367a.scope: Deactivated successfully.
Dec  5 01:15:19 compute-0 python3[215343]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.599496291 +0000 UTC m=+0.100652857 container create 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.555083092 +0000 UTC m=+0.056239718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:19 compute-0 systemd[1]: Started libpod-conmon-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope.
Dec  5 01:15:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:19 compute-0 podman[215357]: 2025-12-05 01:15:19.77339208 +0000 UTC m=+0.180834406 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.791298169 +0000 UTC m=+0.292454765 container init 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.809856866 +0000 UTC m=+0.311013412 container start 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:15:19 compute-0 podman[215344]: 2025-12-05 01:15:19.818797316 +0000 UTC m=+0.319953952 container attach 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  5 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  5 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:19 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1272040897' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  5 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  5 01:15:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  5 01:15:19 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  5 01:15:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=17/18 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:19 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=17/17 les/c/f=18/18/0 sis=33) [2] r=0 lpr=33 pi=[17,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v87: 69 pgs: 62 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 33 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.931506157s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active pruub 66.275405884s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33 pruub=14.931506157s) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown pruub 66.275405884s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.9( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.7( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.8( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.f( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.10( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.11( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.12( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.13( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.14( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.17( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.18( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.15( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.16( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.19( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1a( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1e( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.2( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1d( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.1( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.5( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.6( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.3( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.4( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.c( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 34 pg[3.b( empty local-lis/les=19/20 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  5 01:15:20 compute-0 podman[215402]: 2025-12-05 01:15:20.754254333 +0000 UTC m=+0.156137894 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:15:20 compute-0 podman[215403]: 2025-12-05 01:15:20.795651362 +0000 UTC m=+0.194285536 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  5 01:15:20 compute-0 recursing_faraday[215377]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  5 01:15:20 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  5 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.460465431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 55.776924133s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:20 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=10.460465431s) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown pruub 55.776924133s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:15:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.4( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.2( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=33/35 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.10( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.19( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.14( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.13( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:20 compute-0 systemd[1]: libpod-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope: Deactivated successfully.
Dec  5 01:15:20 compute-0 podman[215344]: 2025-12-05 01:15:20.977133093 +0000 UTC m=+1.478289669 container died 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 35 pg[3.1a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=19/19 les/c/f=20/20/0 sis=33) [1] r=0 lpr=33 pi=[19,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7403e190a00127c2ea1491a61795c1b6cd83841b3830add86a0ff7caab67c06-merged.mount: Deactivated successfully.
Dec  5 01:15:21 compute-0 podman[215344]: 2025-12-05 01:15:21.062632633 +0000 UTC m=+1.563789179 container remove 08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0 (image=quay.io/ceph/ceph:v18, name=recursing_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:15:21 compute-0 systemd[1]: libpod-conmon-08e13b24ec485888c5ab734d2e26564900b0938c094f87c370524d5324e8c5d0.scope: Deactivated successfully.
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec  5 01:15:21 compute-0 python3[215487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.565048712 +0000 UTC m=+0.057031059 container create 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.542655072 +0000 UTC m=+0.034637439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:21 compute-0 systemd[1]: Started libpod-conmon-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope.
Dec  5 01:15:21 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.743683217 +0000 UTC m=+0.235665664 container init 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.756956632 +0000 UTC m=+0.248938979 container start 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:21 compute-0 podman[215488]: 2025-12-05 01:15:21.76210626 +0000 UTC m=+0.254088707 container attach 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  5 01:15:21 compute-0 ceph-mon[192914]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  5 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:21 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/35306483' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  5 01:15:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:15:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  5 01:15:21 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event c9af2dd1-de48-4226-86e7-e655905eba6a (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 0de8065e-0050-4e26-b97b-0b8799bec160 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 339ad7be-a8a2-4a9e-8c52-0669a043c0a3 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 5bf9daa4-deb2-4a7f-82fa-7555bfe35f05 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7028de3a-64c4-4577-87c4-9a3152076f1d (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  5 01:15:21 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 21763038-846a-4407-8073-b5a4c2947a4f (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:21 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=23/24 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.0( empty local-lis/les=35/36 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 36 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=23/23 les/c/f=24/24/0 sis=35) [2] r=0 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v90: 131 pgs: 1 peering, 62 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 35 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.096145630s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 74.387512207s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=15.096145630s) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown pruub 74.387512207s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.4( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.13( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.17( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.18( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.15( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.16( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.11( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.12( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.f( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.10( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.1a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.5( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.7( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.8( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.9( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.a( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.d( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.e( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.b( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 36 pg[4.c( empty local-lis/les=21/22 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  5 01:15:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  5 01:15:22 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  5 01:15:22 compute-0 podman[215527]: 2025-12-05 01:15:22.742838371 +0000 UTC m=+0.150289487 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  5 01:15:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  5 01:15:22 compute-0 sleepy_spence[215502]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  5 01:15:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  5 01:15:23 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37 pruub=12.528459549s) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active pruub 66.495002747s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:23 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  5 01:15:23 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37 pruub=12.528459549s) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown pruub 66.495002747s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37 pruub=10.464612007s) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active pruub 70.495025635s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[6.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37 pruub=10.464612007s) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown pruub 70.495025635s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.15( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.16( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.17( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.3( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=35/37 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.6( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.19( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=21/21 les/c/f=22/22/0 sis=35) [0] r=0 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:23 compute-0 systemd[1]: libpod-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope: Deactivated successfully.
Dec  5 01:15:23 compute-0 podman[215488]: 2025-12-05 01:15:23.044389729 +0000 UTC m=+1.536372156 container died 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:15:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-557b335779dfe6275eb5e2820deb655f19244ae102f7a4813043e07c0f2c3155-merged.mount: Deactivated successfully.
Dec  5 01:15:23 compute-0 podman[215488]: 2025-12-05 01:15:23.141540991 +0000 UTC m=+1.633523378 container remove 46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783 (image=quay.io/ceph/ceph:v18, name=sleepy_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:15:23 compute-0 systemd[1]: libpod-conmon-46e6d6a41c47a037567bd980e76f64a0266a114b370b7bee2dd13fd941b3f783.scope: Deactivated successfully.
Dec  5 01:15:23 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec  5 01:15:23 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec  5 01:15:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  5 01:15:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:15:24 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/3942063658' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  5 01:15:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  5 01:15:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=27/28 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=25/26 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.12( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.10( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.17( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.14( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.0( empty local-lis/les=37/38 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1a( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.16( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.7( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1d( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.d( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v93: 193 pgs: 1 peering, 124 unknown, 68 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.19( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 38 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=27/27 les/c/f=28/28/0 sis=37) [1] r=0 lpr=37 pi=[27,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.16( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.10( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.12( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.0( empty local-lis/les=37/38 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.3( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.18( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.7( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.19( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.9( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.a( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.5( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 38 pg[6.1b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=25/25 les/c/f=26/26/0 sis=37) [0] r=0 lpr=37 pi=[25,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:24 compute-0 python3[215633]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:15:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  5 01:15:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  5 01:15:24 compute-0 podman[215702]: 2025-12-05 01:15:24.705391161 +0000 UTC m=+0.111937299 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Dec  5 01:15:24 compute-0 python3[215705]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897323.8143196-37123-56126538119197/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:15:24 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec  5 01:15:24 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec  5 01:15:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  5 01:15:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  5 01:15:25 compute-0 python3[215826]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:15:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Dec  5 01:15:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Dec  5 01:15:26 compute-0 ceph-mon[192914]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  5 01:15:26 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec  5 01:15:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 1 peering, 62 unknown, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:26 compute-0 python3[215901]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897325.1030889-37137-125420570080905/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a07363e650807b3400a8fed2d84c9a8d6bf803ad backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:15:26 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 9 completed events
Dec  5 01:15:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:15:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec  5 01:15:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec  5 01:15:26 compute-0 python3[215951]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.870156169 +0000 UTC m=+0.077322362 container create 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:26 compute-0 systemd[194721]: Starting Mark boot as successful...
Dec  5 01:15:26 compute-0 systemd[1]: Started libpod-conmon-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope.
Dec  5 01:15:26 compute-0 systemd[194721]: Finished Mark boot as successful.
Dec  5 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.841963204 +0000 UTC m=+0.049129417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:26 compute-0 podman[215952]: 2025-12-05 01:15:26.987708208 +0000 UTC m=+0.194874431 container init 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.003329356 +0000 UTC m=+0.210495559 container start 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.008654849 +0000 UTC m=+0.215821062 container attach 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:15:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec  5 01:15:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec  5 01:15:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Dec  5 01:15:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:15:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  5 01:15:27 compute-0 festive_mendeleev[215968]: 
Dec  5 01:15:27 compute-0 festive_mendeleev[215968]: [global]
Dec  5 01:15:27 compute-0 festive_mendeleev[215968]: #011fsid = cbd280d3-cbd8-528b-ace6-2b3a887cdcee
Dec  5 01:15:27 compute-0 festive_mendeleev[215968]: #011mon_host = 192.168.122.100
Dec  5 01:15:27 compute-0 systemd[1]: libpod-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope: Deactivated successfully.
Dec  5 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.651772245 +0000 UTC m=+0.858938478 container died 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:15:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3f1f278fad6650683304631436df438fe49e1309ac42571878003a68e7433ad-merged.mount: Deactivated successfully.
Dec  5 01:15:27 compute-0 podman[215952]: 2025-12-05 01:15:27.746472872 +0000 UTC m=+0.953639085 container remove 85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30 (image=quay.io/ceph/ceph:v18, name=festive_mendeleev, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:15:27 compute-0 systemd[1]: libpod-conmon-85fc6ef32907e3e1329f0edde408ab604bb5022e865cbb9f51b750f8b363ff30.scope: Deactivated successfully.
Dec  5 01:15:27 compute-0 podman[215991]: 2025-12-05 01:15:27.786792522 +0000 UTC m=+0.182597712 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:15:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v95: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 python3[216111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.233450397 +0000 UTC m=+0.062697541 container create ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1465093527' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:15:28 compute-0 systemd[1]: Started libpod-conmon-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope.
Dec  5 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.210158903 +0000 UTC m=+0.039406067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:28 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.365482193 +0000 UTC m=+0.194729357 container init ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.385272903 +0000 UTC m=+0.214520047 container start ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:28 compute-0 podman[216149]: 2025-12-05 01:15:28.389383864 +0000 UTC m=+0.218631058 container attach ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:15:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec  5 01:15:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304944992s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322792053s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304885864s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322792053s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357736588s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375885010s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357615471s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375858307s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357591629s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375858307s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357636452s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375885010s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304480553s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322868347s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304277420s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322738647s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304409981s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322868347s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304246902s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322738647s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304057121s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322738647s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304033279s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322738647s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304913521s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.323722839s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.304894447s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.323722839s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303823471s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322731018s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.357022285s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375934601s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356994629s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375934601s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303780556s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322731018s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356890678s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375877380s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356790543s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375877380s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356760025s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375900269s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303393364s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322540283s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303343773s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322540283s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356688499s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375900269s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356653214s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375949860s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356674194s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375972748s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356656075s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375972748s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356337547s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375995636s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.302814484s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322509766s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356300354s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375995636s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.302789688s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322509766s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356621742s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375949860s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301838875s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321701050s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301820755s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321701050s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355987549s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.375980377s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355951309s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.375980377s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355987549s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376041412s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355964661s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376041412s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303320885s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.323463440s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301454544s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321678162s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301431656s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321678162s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.303228378s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.323463440s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301286697s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321655273s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301271439s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321655273s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355638504s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376068115s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355610847s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376068115s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355605125s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376102448s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300979614s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321487427s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355587959s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376102448s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300941467s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321495056s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301899910s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322502136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300894737s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321495056s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301883698s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322502136s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300880432s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321487427s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355349541s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376091003s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300849915s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321617126s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355325699s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376091003s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300819397s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321617126s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355272293s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376148224s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355251312s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376132965s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355217934s) [0] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376132965s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355228424s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376148224s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300493240s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321487427s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300465584s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321487427s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355172157s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376213074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355142593s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376213074s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300222397s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321434021s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300314903s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321472168s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300189018s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321434021s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300136566s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321418762s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300115585s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321418762s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.300202370s) [1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321472168s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354790688s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376213074s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299877167s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321311951s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354733467s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376213074s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301344872s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.322875977s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.301317215s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.322875977s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354547501s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.376239777s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355680466s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.377380371s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.354521751s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.376239777s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.355651855s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.377380371s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299545288s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 68.321334839s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356031418s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 62.377906799s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299497604s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321334839s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39 pruub=9.356008530s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 62.377906799s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=15.299395561s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 68.321311951s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.19( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.7( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.4( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.5( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.1f( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.2( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.15( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.13( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[5.14( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[2.11( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.364226341s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.055831909s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.18( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.364199638s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.055831909s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392975807s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.084739685s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.15( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392960548s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.084739685s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403947830s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095840454s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.14( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403932571s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095840454s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392865181s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.084884644s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.17( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.392849922s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.084884644s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363949776s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056091309s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.14( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363932610s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056091309s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363529205s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.055809021s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.13( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363513947s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.055809021s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403358459s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095748901s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.11( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403343201s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095748901s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363568306s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056037903s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.12( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363554955s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056037903s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363934517s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056488037s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.11( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363919258s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056488037s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403327942s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.095993042s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.13( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403310776s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.095993042s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363390923s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056144714s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.10( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363377571s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056144714s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363306999s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056182861s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.f( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363291740s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056182861s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403141022s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096138000s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403121948s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096138000s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363162041s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056259155s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.e( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363145828s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056259155s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403162003s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096397400s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.403143883s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096397400s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363057137s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056381226s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.d( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.363042831s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056381226s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.18( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1a( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.18( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.15( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402201653s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096275330s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402175903s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096275330s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402244568s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096427917s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.2( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.402199745s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096427917s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362100601s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056411743s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.2( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362083435s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056411743s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401975632s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096343994s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362030029s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056427002s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.362015724s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056427002s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401942253s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096343994s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401949883s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096481323s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401930809s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096481323s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401866913s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096496582s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.6( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401853561s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096496582s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361931801s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056587219s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361819267s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056533813s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.4( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361874580s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056587219s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.9( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361803055s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056533813s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401807785s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096626282s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.b( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401793480s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096626282s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361680984s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056594849s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361666679s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056594849s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361672401s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056632996s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.5( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361646652s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056632996s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361621857s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056655884s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.a( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361604691s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056655884s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401548386s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096679688s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361520767s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.056678772s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1b( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361498833s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.056678772s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.8( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401518822s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096679688s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401325226s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096733093s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.4( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401301384s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096733093s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361418724s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057006836s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.8( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361398697s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401041031s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096755981s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1e( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401024818s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096755981s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401050568s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096855164s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361319542s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057121277s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1f( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.401032448s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096855164s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.1c( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.361282349s) [2] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057121277s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.399742126s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.097023010s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1c( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.399720192s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.097023010s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.359388351s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active pruub 76.057029724s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[4.7( empty local-lis/les=35/37 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39 pruub=10.359352112s) [1] r=-1 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.057029724s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.398802757s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 77.096832275s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[6.1d( empty local-lis/les=37/38 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.398769379s) [1] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.096832275s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.14( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.13( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.11( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.11( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.13( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.c( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.f( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.9( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.6( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.1( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.7( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.e( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.3( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.15( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[2.1b( empty local-lis/les=0/0 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.381144524s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028083801s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.381120682s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028083801s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287405968s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934448242s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287388802s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934448242s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287194252s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934394836s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.17( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.287174225s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934394836s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380805016s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028121948s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.13( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380765915s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028121948s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286743164s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934387207s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286722183s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934387207s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286543846s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934341431s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.286526680s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934341431s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380309105s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028228760s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.11( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.380291939s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028228760s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285936356s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934265137s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379876137s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028305054s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.12( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285881996s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934265137s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.15( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379821777s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028305054s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285408020s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934082031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285367966s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934082031s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285311699s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934066772s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285275459s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934066772s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285237312s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934127808s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.11( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.285213470s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934127808s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379448891s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028373718s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379390717s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028381348s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379405975s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028373718s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.9( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379357338s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028381348s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379235268s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028465271s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.8( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.379196167s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028465271s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284764290s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934135437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.c( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284743309s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934135437s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378978729s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028442383s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378941536s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028442383s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378835678s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028564453s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1b( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.8( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.f( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[6.1f( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[4.1c( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1c( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.11( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.15( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.17( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284173012s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933906555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.4( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378814697s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028564453s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284139633s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933906555s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378671646s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028564453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.5( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378643990s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028564453s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.283966064s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933914185s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.3( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.283926964s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933914185s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284580231s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934646606s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.5( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.284555435s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934646606s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378441811s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028648376s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378406525s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028648376s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378314972s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028694153s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.11( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.2( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.378288269s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028694153s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.388133049s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037796021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.6( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.387015343s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037796021s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.282447815s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933822632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.6( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.282421112s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933822632s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376772881s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028694153s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281116486s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933090210s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.3( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376710892s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028694153s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280876160s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933074951s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.8( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280844688s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933074951s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376416206s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028823853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.c( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376382828s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028823853s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.7( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280618668s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933090210s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280435562s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933052063s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280406952s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933052063s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280331612s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933029175s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.a( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.280296326s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933029175s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.376008034s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028816223s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.e( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375980377s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028816223s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375961304s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.028892517s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1f( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.375928879s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.028892517s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281331062s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.934333801s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1b( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.281299591s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.934333801s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.384404182s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037620544s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.18( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.384373665s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037620544s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.279585838s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.933006287s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1d( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.279357910s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.933006287s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383955002s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037704468s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1a( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383920670s) [2] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037704468s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278944969s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.932914734s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1e( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278918266s) [2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.932914734s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383519173s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active pruub 71.037635803s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[7.1b( empty local-lis/les=37/38 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39 pruub=11.383499146s) [0] r=-1 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.037635803s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278614998s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 67.932945251s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=33/35 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39 pruub=8.278591156s) [0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 67.932945251s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.17( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.12( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.10( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.f( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.d( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.a( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.8( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.5( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.5( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.13( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.12( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.9( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.c( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.f( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.4( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.3( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.c( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.d( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.2( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.e( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.2( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.6( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.b( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.4( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.8( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1e( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1c( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[6.1d( empty local-lis/les=0/0 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 39 pg[4.7( empty local-lis/les=0/0 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.6( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.6( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.3( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.8( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.a( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.1f( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.c( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1b( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.18( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.7( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[7.1b( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 39 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.e( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[7.1a( empty local-lis/les=0/0 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.1e( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 39 pg[3.1d( empty local-lis/les=0/0 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:28 compute-0 podman[216253]: 2025-12-05 01:15:28.907878443 +0000 UTC m=+0.088239055 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:15:29 compute-0 podman[216253]: 2025-12-05 01:15:29.004573253 +0000 UTC m=+0.184933855 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1578651117' entity='client.admin' 
Dec  5 01:15:29 compute-0 unruffled_carver[216167]: set ssl_option
Dec  5 01:15:29 compute-0 systemd[1]: libpod-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope: Deactivated successfully.
Dec  5 01:15:29 compute-0 podman[216149]: 2025-12-05 01:15:29.099777893 +0000 UTC m=+0.929025037 container died ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8dfceeae49b9b6d666c968bd9900e3599ed0556b9ba95f56b89b3cabf2b738c9-merged.mount: Deactivated successfully.
Dec  5 01:15:29 compute-0 podman[216149]: 2025-12-05 01:15:29.15940204 +0000 UTC m=+0.988649184 container remove ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9 (image=quay.io/ceph/ceph:v18, name=unruffled_carver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:15:29 compute-0 systemd[1]: libpod-conmon-ea203986fa33ae5c4dbc0400f4bad6b10b795341e942fa9247878b999b57ffc9.scope: Deactivated successfully.
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:15:29 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/1578651117' entity='client.admin' 
Dec  5 01:15:29 compute-0 python3[216380]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4f1d045a-418e-4ab9-b318-bbb001c4ef3a does not exist
Dec  5 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3866c7e1-196f-4fb3-b294-f73adfaa9aa4 does not exist
Dec  5 01:15:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6dec5ec6-d67c-4585-b1bd-0ffdf2fd0d5e does not exist
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.605660904 +0000 UTC m=+0.054167662 container create 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:29 compute-0 systemd[1]: Started libpod-conmon-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope.
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  5 01:15:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  5 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.584776674 +0000 UTC m=+0.033283442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  5 01:15:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.1b( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1f( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.12( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.13( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.15( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.13( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.17( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.9( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.b( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.a( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.f( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.6( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1f( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.3( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.f( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1c( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.6( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.18( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.1d( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.3( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.9( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.c( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.4( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.f( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[3.1b( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[7.1f( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [0] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.18( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [0] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 40 pg[2.19( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1e( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.f( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.2( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.4( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.9( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.14( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.15( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.11( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.13( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.1c( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.1f( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.11( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1c( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.18( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.11( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.15( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.e( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.11( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.8( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.5( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.2( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.5( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.a( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.7( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.8( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.e( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.1d( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.1a( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.1e( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[7.c( empty local-lis/les=39/40 n=0 ec=37/27 lis/c=37/37 les/c/f=38/38/0 sis=39) [2] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[3.16( empty local-lis/les=39/40 n=0 ec=33/19 lis/c=33/33 les/c/f=35/35/0 sis=39) [2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [2] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.729018818 +0000 UTC m=+0.177525566 container init 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.17( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.8( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.14( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.10( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1d( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.1c( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.1b( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.7( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[4.12( empty local-lis/les=39/40 n=0 ec=35/21 lis/c=35/35 les/c/f=37/37/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=37/25 lis/c=37/37 les/c/f=38/38/0 sis=39) [1] r=0 lpr=39 pi=[37,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.a( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.d( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.5( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.4( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.6( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.9( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[2.15( empty local-lis/les=39/40 n=0 ec=33/17 lis/c=33/33 les/c/f=34/34/0 sis=39) [1] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=35/23 lis/c=35/35 les/c/f=36/36/0 sis=39) [1] r=0 lpr=39 pi=[35,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:29 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  5 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.743614049 +0000 UTC m=+0.192120817 container start 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:15:29 compute-0 podman[158197]: time="2025-12-05T01:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:15:29 compute-0 podman[216412]: 2025-12-05 01:15:29.761310943 +0000 UTC m=+0.209817721 container attach 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30687 "" "Go-http-client/1.1"
Dec  5 01:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6250 "" "Go-http-client/1.1"
Dec  5 01:15:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:15:30 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:30 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  5 01:15:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:30 compute-0 unruffled_sammet[216450]: Scheduled rgw.rgw update...
Dec  5 01:15:30 compute-0 systemd[1]: libpod-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope: Deactivated successfully.
Dec  5 01:15:30 compute-0 podman[216412]: 2025-12-05 01:15:30.370122751 +0000 UTC m=+0.818629539 container died 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e06a101a428cc6ee40bdbad045c7723800bc93ae133c19180e9b1674430fea-merged.mount: Deactivated successfully.
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.444162394 +0000 UTC m=+0.065470875 container create e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:30 compute-0 podman[216412]: 2025-12-05 01:15:30.449397454 +0000 UTC m=+0.897904202 container remove 1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6 (image=quay.io/ceph/ceph:v18, name=unruffled_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:15:30 compute-0 systemd[1]: libpod-conmon-1ee16aa2a9d3edd34fcaab706bae41910a533ef2c34b0f9fed8fdfc9e1abd0a6.scope: Deactivated successfully.
Dec  5 01:15:30 compute-0 systemd[1]: Started libpod-conmon-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope.
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.403439083 +0000 UTC m=+0.024747594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.554381586 +0000 UTC m=+0.175690087 container init e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.566286075 +0000 UTC m=+0.187594566 container start e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:30 compute-0 romantic_kare[216616]: 167 167
Dec  5 01:15:30 compute-0 systemd[1]: libpod-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope: Deactivated successfully.
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.577220808 +0000 UTC m=+0.198529319 container attach e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:15:30 compute-0 ceph-mon[192914]: Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.585851069 +0000 UTC m=+0.207159600 container died e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d7c56e79f87e082195a02808e68476d5e50026a5ffaa0b558b96713fccffe29-merged.mount: Deactivated successfully.
Dec  5 01:15:30 compute-0 podman[216589]: 2025-12-05 01:15:30.637097612 +0000 UTC m=+0.258406093 container remove e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:30 compute-0 systemd[1]: libpod-conmon-e6fd9570eea0620248d9cfab4783e0c77e83f3d18f095b54cb4566ce0507bff5.scope: Deactivated successfully.
Dec  5 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.84756659 +0000 UTC m=+0.088984135 container create b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:15:30 compute-0 systemd[1]: Started libpod-conmon-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope.
Dec  5 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.812536571 +0000 UTC m=+0.053954076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.968103038 +0000 UTC m=+0.209520593 container init b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.98012765 +0000 UTC m=+0.221545165 container start b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:15:30 compute-0 podman[216638]: 2025-12-05 01:15:30.984383075 +0000 UTC m=+0.225800610 container attach b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:15:31 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event a81ac3fe-503f-4ecd-a35d-c802451fb572 (Global Recovery Event) in 10 seconds
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: ERROR   01:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:15:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:15:31 compute-0 python3[216733]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:15:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec  5 01:15:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec  5 01:15:31 compute-0 python3[216805]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897331.1680703-37178-57175415099049/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:15:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:32 compute-0 podman[216819]: 2025-12-05 01:15:32.167591178 +0000 UTC m=+0.137365961 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:15:32 compute-0 modest_feistel[216653]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:15:32 compute-0 modest_feistel[216653]: --> relative data size: 1.0
Dec  5 01:15:32 compute-0 modest_feistel[216653]: --> All data devices are unavailable
Dec  5 01:15:32 compute-0 systemd[1]: libpod-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Deactivated successfully.
Dec  5 01:15:32 compute-0 systemd[1]: libpod-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Consumed 1.236s CPU time.
Dec  5 01:15:32 compute-0 podman[216874]: 2025-12-05 01:15:32.369041974 +0000 UTC m=+0.051908361 container died b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b11f0ab7ab6212ada1308720dbb3c1ad0cbff9dfa890222b47bddfc15fff817-merged.mount: Deactivated successfully.
Dec  5 01:15:32 compute-0 podman[216874]: 2025-12-05 01:15:32.468460497 +0000 UTC m=+0.151326844 container remove b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:15:32 compute-0 systemd[1]: libpod-conmon-b34dc3564fc362578d4b1bfe1bff1a891765a42d3942342914bd01cb515d5b9e.scope: Deactivated successfully.
Dec  5 01:15:32 compute-0 python3[216912]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.806259746 +0000 UTC m=+0.083595500 container create 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.772557783 +0000 UTC m=+0.049893617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:32 compute-0 systemd[1]: Started libpod-conmon-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope.
Dec  5 01:15:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.948582538 +0000 UTC m=+0.225918382 container init 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.966329924 +0000 UTC m=+0.243665708 container start 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  5 01:15:32 compute-0 podman[216938]: 2025-12-05 01:15:32.97439966 +0000 UTC m=+0.251735554 container attach 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:15:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec  5 01:15:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec  5 01:15:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:15:33 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  5 01:15:33 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0[192910]: 2025-12-05T01:15:33.601+0000 7f6c12c58640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 new map
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-05T01:15:33.603075+0000#012modified#0112025-12-05T01:15:33.603118+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  5 01:15:33 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:33 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  5 01:15:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:33 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.638610072 +0000 UTC m=+0.079516721 container create 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:15:33 compute-0 systemd[1]: libpod-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope: Deactivated successfully.
Dec  5 01:15:33 compute-0 podman[216938]: 2025-12-05 01:15:33.672618283 +0000 UTC m=+0.949954067 container died 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.597717497 +0000 UTC m=+0.038624156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:33 compute-0 systemd[1]: Started libpod-conmon-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope.
Dec  5 01:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-77b8ae5867e02a910009cc9ab1b78b14eefb966316a223f0b0ba476359d0eda7-merged.mount: Deactivated successfully.
Dec  5 01:15:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:33 compute-0 podman[216938]: 2025-12-05 01:15:33.756774117 +0000 UTC m=+1.034109851 container remove 58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca (image=quay.io/ceph/ceph:v18, name=great_wu, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:15:33 compute-0 systemd[1]: libpod-conmon-58fb71bcce52932cda50079677ef169672480d585fa671c538de41eeaa1a0eca.scope: Deactivated successfully.
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.774377059 +0000 UTC m=+0.215283688 container init 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.784775917 +0000 UTC m=+0.225682526 container start 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.790555562 +0000 UTC m=+0.231462211 container attach 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:33 compute-0 systemd[1]: libpod-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope: Deactivated successfully.
Dec  5 01:15:33 compute-0 awesome_buck[217115]: 167 167
Dec  5 01:15:33 compute-0 conmon[217115]: conmon 27c9858eae97c91ce3e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope/container/memory.events
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.795874805 +0000 UTC m=+0.236781414 container died 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b8d2580085c10a2aa179d5e2e433f036500b39e879518d773625da04e353c4c-merged.mount: Deactivated successfully.
Dec  5 01:15:33 compute-0 podman[217090]: 2025-12-05 01:15:33.845535495 +0000 UTC m=+0.286442104 container remove 27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:15:33 compute-0 systemd[1]: libpod-conmon-27c9858eae97c91ce3e5922ac9881a2982007f7d0f76ac29910e54a63ae5d212.scope: Deactivated successfully.
Dec  5 01:15:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v101: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.149334123 +0000 UTC m=+0.097247496 container create ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  5 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  5 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  5 01:15:34 compute-0 ceph-mon[192914]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  5 01:15:34 compute-0 ceph-mon[192914]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  5 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  5 01:15:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:34 compute-0 python3[217167]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.116326679 +0000 UTC m=+0.064240092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:34 compute-0 systemd[1]: Started libpod-conmon-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope.
Dec  5 01:15:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.289461936 +0000 UTC m=+0.237375349 container init ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.31015036 +0000 UTC m=+0.258063733 container start ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:34 compute-0 podman[217168]: 2025-12-05 01:15:34.315651708 +0000 UTC m=+0.263565081 container attach ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.322526472 +0000 UTC m=+0.105069495 container create a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.275565784 +0000 UTC m=+0.058108807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:34 compute-0 systemd[1]: Started libpod-conmon-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope.
Dec  5 01:15:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.471343718 +0000 UTC m=+0.253886711 container init a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.478852629 +0000 UTC m=+0.261395632 container start a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:34 compute-0 podman[217183]: 2025-12-05 01:15:34.484421139 +0000 UTC m=+0.266964132 container attach a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  5 01:15:34 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  5 01:15:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 01:15:35 compute-0 ceph-mgr[193209]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:35 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  5 01:15:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:35 compute-0 hopeful_newton[217186]: {
Dec  5 01:15:35 compute-0 hungry_saha[217204]: Scheduled mds.cephfs update...
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    "0": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "devices": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "/dev/loop3"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            ],
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_name": "ceph_lv0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_size": "21470642176",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "name": "ceph_lv0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "tags": {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.crush_device_class": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.encrypted": "0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_id": "0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.type": "block",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.vdo": "0"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            },
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "type": "block",
Dec  5 01:15:35 compute-0 ceph-mon[192914]: Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "vg_name": "ceph_vg0"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        }
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    ],
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    "1": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "devices": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "/dev/loop4"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            ],
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_name": "ceph_lv1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_size": "21470642176",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "name": "ceph_lv1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "tags": {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.crush_device_class": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.encrypted": "0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_id": "1",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.type": "block",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.vdo": "0"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            },
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "type": "block",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "vg_name": "ceph_vg1"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        }
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    ],
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    "2": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "devices": [
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "/dev/loop5"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            ],
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_name": "ceph_lv2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_size": "21470642176",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "name": "ceph_lv2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "tags": {
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.crush_device_class": "",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.encrypted": "0",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osd_id": "2",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.type": "block",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:                "ceph.vdo": "0"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            },
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "type": "block",
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:            "vg_name": "ceph_vg2"
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:        }
Dec  5 01:15:35 compute-0 hopeful_newton[217186]:    ]
Dec  5 01:15:35 compute-0 hopeful_newton[217186]: }
Dec  5 01:15:35 compute-0 systemd[1]: libpod-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope: Deactivated successfully.
Dec  5 01:15:35 compute-0 conmon[217204]: conmon a6319edc16ec898c696e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope/container/memory.events
Dec  5 01:15:35 compute-0 podman[217183]: 2025-12-05 01:15:35.197329564 +0000 UTC m=+0.979872607 container died a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:35 compute-0 systemd[1]: libpod-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope: Deactivated successfully.
Dec  5 01:15:35 compute-0 podman[217168]: 2025-12-05 01:15:35.220475114 +0000 UTC m=+1.168388517 container died ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dc84a11aa9bc944c649606a90ba8878e4085b017f3237cef40ef429836ee356-merged.mount: Deactivated successfully.
Dec  5 01:15:35 compute-0 podman[217183]: 2025-12-05 01:15:35.27408379 +0000 UTC m=+1.056626783 container remove a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df (image=quay.io/ceph/ceph:v18, name=hungry_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:15:35 compute-0 systemd[1]: libpod-conmon-a6319edc16ec898c696e1d207378c2e362f6524012ea7717d211768c9bccf3df.scope: Deactivated successfully.
Dec  5 01:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-082b2c0237057011c32f66ece8924b27c85b9560767cfc0f67dd19af71ee99d3-merged.mount: Deactivated successfully.
Dec  5 01:15:35 compute-0 podman[217168]: 2025-12-05 01:15:35.32634018 +0000 UTC m=+1.274253543 container remove ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:35 compute-0 systemd[1]: libpod-conmon-ea0c2de8eccf8d7afe54b3466951670598e35a066957af668809c5814254c443.scope: Deactivated successfully.
Dec  5 01:15:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec  5 01:15:35 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec  5 01:15:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v102: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:36 compute-0 python3[217432]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  5 01:15:36 compute-0 ceph-mon[192914]: Saving service mds.cephfs spec with placement compute-0
Dec  5 01:15:36 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 10 completed events
Dec  5 01:15:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:15:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.484110853 +0000 UTC m=+0.078712729 container create 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.45150171 +0000 UTC m=+0.046103596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:36 compute-0 systemd[1]: Started libpod-conmon-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope.
Dec  5 01:15:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.638062807 +0000 UTC m=+0.232664713 container init 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.658184696 +0000 UTC m=+0.252786572 container start 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.665577984 +0000 UTC m=+0.260180040 container attach 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:36 compute-0 elated_ritchie[217557]: 167 167
Dec  5 01:15:36 compute-0 systemd[1]: libpod-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope: Deactivated successfully.
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.670622559 +0000 UTC m=+0.265224465 container died 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:15:36 compute-0 python3[217556]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897335.7043757-37208-109404068672281/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=1ccf2af1c4d9cd0d8c5f12e3a57b95f6f703bc49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2d3413a1c0ee88046defb6be889886c5e0f7eb31bfb1b3cefc3a4f03d480bf-merged.mount: Deactivated successfully.
Dec  5 01:15:36 compute-0 podman[217515]: 2025-12-05 01:15:36.745965087 +0000 UTC m=+0.340566933 container remove 80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:36 compute-0 systemd[1]: libpod-conmon-80ed08e20a96e84b4c3c0cacd4050f0a34956ab116e8946b979d2fcff51ef59e.scope: Deactivated successfully.
Dec  5 01:15:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  5 01:15:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  5 01:15:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.037942929 +0000 UTC m=+0.087540856 container create 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.003189508 +0000 UTC m=+0.052787505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:37 compute-0 systemd[1]: Started libpod-conmon-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope.
Dec  5 01:15:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.18992901 +0000 UTC m=+0.239526957 container init 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.205086866 +0000 UTC m=+0.254684793 container start 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  5 01:15:37 compute-0 podman[217603]: 2025-12-05 01:15:37.210509721 +0000 UTC m=+0.260107658 container attach 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:37 compute-0 python3[217647]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.424220246 +0000 UTC m=+0.113763649 container create 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.389263389 +0000 UTC m=+0.078806842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:37 compute-0 systemd[1]: Started libpod-conmon-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope.
Dec  5 01:15:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.604146835 +0000 UTC m=+0.293690308 container init 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.624381507 +0000 UTC m=+0.313924900 container start 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:15:37 compute-0 podman[217650]: 2025-12-05 01:15:37.631311793 +0000 UTC m=+0.320855236 container attach 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v103: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:38 compute-0 stupefied_buck[217644]: {
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_id": 0,
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "type": "bluestore"
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    },
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_id": 1,
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "type": "bluestore"
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    },
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_id": 2,
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:        "type": "bluestore"
Dec  5 01:15:38 compute-0 stupefied_buck[217644]:    }
Dec  5 01:15:38 compute-0 stupefied_buck[217644]: }
Dec  5 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Dec  5 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  5 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  5 01:15:38 compute-0 systemd[1]: libpod-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Deactivated successfully.
Dec  5 01:15:38 compute-0 systemd[1]: libpod-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Consumed 1.161s CPU time.
Dec  5 01:15:38 compute-0 systemd[1]: libpod-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope: Deactivated successfully.
Dec  5 01:15:38 compute-0 podman[217650]: 2025-12-05 01:15:38.413615458 +0000 UTC m=+1.103158811 container died 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:38 compute-0 podman[217718]: 2025-12-05 01:15:38.452022127 +0000 UTC m=+0.053063673 container died 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6b11af1af30d66c68fbf874990f6587c28b8f42849ef265759206c485d35a89-merged.mount: Deactivated successfully.
Dec  5 01:15:38 compute-0 podman[217650]: 2025-12-05 01:15:38.508998933 +0000 UTC m=+1.198542296 container remove 2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d (image=quay.io/ceph/ceph:v18, name=strange_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-309a38b47357b93495fcfb629ed71819c1ca0971bbea5b8d072c8b1a325bf050-merged.mount: Deactivated successfully.
Dec  5 01:15:38 compute-0 systemd[1]: libpod-conmon-2065a5b11f871314f3ef6765147ac3792feaae0723f2f9e2a045077235bf0a7d.scope: Deactivated successfully.
Dec  5 01:15:38 compute-0 podman[217718]: 2025-12-05 01:15:38.561480319 +0000 UTC m=+0.162521835 container remove 8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:15:38 compute-0 systemd[1]: libpod-conmon-8e162ba5525dc443cf72740a661fdc415687d3e857729061b3e0b68ad00e6cb7.scope: Deactivated successfully.
Dec  5 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  5 01:15:39 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2168049675' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  5 01:15:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:39 compute-0 python3[217919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:39 compute-0 podman[217935]: 2025-12-05 01:15:39.910057613 +0000 UTC m=+0.083407555 container create 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:39 compute-0 podman[217935]: 2025-12-05 01:15:39.87857452 +0000 UTC m=+0.051924472 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:39 compute-0 systemd[1]: Started libpod-conmon-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope.
Dec  5 01:15:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.055747946 +0000 UTC m=+0.229097958 container init 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.072961787 +0000 UTC m=+0.246311699 container start 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:15:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v104: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.081429304 +0000 UTC m=+0.254779266 container attach 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:15:40 compute-0 podman[218009]: 2025-12-05 01:15:40.524705867 +0000 UTC m=+0.131704188 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:15:40 compute-0 podman[218009]: 2025-12-05 01:15:40.647171558 +0000 UTC m=+0.254169869 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:15:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  5 01:15:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4077586626' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  5 01:15:40 compute-0 pensive_zhukovsky[217973]: 
Dec  5 01:15:40 compute-0 pensive_zhukovsky[217973]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":193,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84148224,"bytes_avail":64327778304,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2025-12-05T01:15:38.079813+0000","services":{"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Dec  5 01:15:40 compute-0 systemd[1]: libpod-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope: Deactivated successfully.
Dec  5 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.799294213 +0000 UTC m=+0.972644125 container died 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c83cd88815889775860378630d862ee15a356550e287c88045273a0b231a7c-merged.mount: Deactivated successfully.
Dec  5 01:15:40 compute-0 podman[217935]: 2025-12-05 01:15:40.855292483 +0000 UTC m=+1.028642395 container remove 67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc (image=quay.io/ceph/ceph:v18, name=pensive_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:15:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.10 deep-scrub starts
Dec  5 01:15:40 compute-0 systemd[1]: libpod-conmon-67fb4245e88e3f947555e13c643bafec3f2e2822d90ec040af5b867a4c7515cc.scope: Deactivated successfully.
Dec  5 01:15:40 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.10 deep-scrub ok
Dec  5 01:15:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec  5 01:15:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec  5 01:15:41 compute-0 python3[218144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.515778845 +0000 UTC m=+0.128623646 container create 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.469061954 +0000 UTC m=+0.081906805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:41 compute-0 systemd[1]: Started libpod-conmon-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope.
Dec  5 01:15:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:41 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec  5 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.686225121 +0000 UTC m=+0.299069892 container init 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:15:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec  5 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.708363784 +0000 UTC m=+0.321208545 container start 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:41 compute-0 podman[218162]: 2025-12-05 01:15:41.712807173 +0000 UTC m=+0.325651974 container attach 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Dec  5 01:15:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v105: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849309770' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:15:42 compute-0 busy_goldwasser[218194]: 
Dec  5 01:15:42 compute-0 busy_goldwasser[218194]: {"epoch":1,"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","modified":"2025-12-05T01:12:19.563284Z","created":"2025-12-05T01:12:19.563284Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec  5 01:15:42 compute-0 busy_goldwasser[218194]: dumped monmap epoch 1
Dec  5 01:15:42 compute-0 systemd[1]: libpod-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope: Deactivated successfully.
Dec  5 01:15:42 compute-0 podman[218162]: 2025-12-05 01:15:42.449450555 +0000 UTC m=+1.062295346 container died 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f971d02b62a0431988fa6ff8d09b8d2abed4302b5971877ed74e2fe6aa0c7435-merged.mount: Deactivated successfully.
Dec  5 01:15:42 compute-0 podman[218162]: 2025-12-05 01:15:42.522553723 +0000 UTC m=+1.135398484 container remove 7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72 (image=quay.io/ceph/ceph:v18, name=busy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:15:42 compute-0 systemd[1]: libpod-conmon-7aa9c4e07984bb1ab60d288e64dea6a7af70034b764a49bf76a483e9042b6e72.scope: Deactivated successfully.
Dec  5 01:15:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1ee6ef4c-f652-40db-ae28-fefd31679028 does not exist
Dec  5 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7339d669-5b71-4f3d-a29e-0485ced1233e does not exist
Dec  5 01:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d80e170-cb23-4dd5-9b82-ffe46d5fed91 does not exist
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:42 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec  5 01:15:42 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec  5 01:15:43 compute-0 python3[218409]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.371693619 +0000 UTC m=+0.103349739 container create d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.338822938 +0000 UTC m=+0.070479128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:43 compute-0 systemd[1]: Started libpod-conmon-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope.
Dec  5 01:15:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.525632102 +0000 UTC m=+0.257288252 container init d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.536298278 +0000 UTC m=+0.267954388 container start d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:15:43 compute-0 podman[218443]: 2025-12-05 01:15:43.541499848 +0000 UTC m=+0.273156008 container attach d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:15:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec  5 01:15:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec  5 01:15:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v106: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.115188825 +0000 UTC m=+0.074363153 container create 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.087642847 +0000 UTC m=+0.046817195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:44 compute-0 systemd[1]: Started libpod-conmon-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope.
Dec  5 01:15:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Dec  5 01:15:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2977895372' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  5 01:15:44 compute-0 competent_bhaskara[218491]: [client.openstack]
Dec  5 01:15:44 compute-0 competent_bhaskara[218491]: #011key = AQBBMTJpAAAAABAAQWv2lkQhfZ74+C7m+rCDZA==
Dec  5 01:15:44 compute-0 competent_bhaskara[218491]: #011caps mgr = "allow *"
Dec  5 01:15:44 compute-0 competent_bhaskara[218491]: #011caps mon = "profile rbd"
Dec  5 01:15:44 compute-0 competent_bhaskara[218491]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.282408844 +0000 UTC m=+0.241583202 container init 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:44 compute-0 systemd[1]: libpod-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope: Deactivated successfully.
Dec  5 01:15:44 compute-0 podman[218443]: 2025-12-05 01:15:44.288946319 +0000 UTC m=+1.020602429 container died d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.301355082 +0000 UTC m=+0.260529410 container start 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.308519654 +0000 UTC m=+0.267694022 container attach 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:44 compute-0 quizzical_haibt[218576]: 167 167
Dec  5 01:15:44 compute-0 systemd[1]: libpod-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope: Deactivated successfully.
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.313849107 +0000 UTC m=+0.273023445 container died 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ea8d63b8dd87f6d8680411412c5b69ba7a02e3e2cd67a16e1bf7a3b5fea87fc-merged.mount: Deactivated successfully.
Dec  5 01:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-baf6429769974d326af0ab38a28e6f5ed29e07f51aed2f8141f42326a9a99c76-merged.mount: Deactivated successfully.
Dec  5 01:15:44 compute-0 podman[218560]: 2025-12-05 01:15:44.403130858 +0000 UTC m=+0.362305156 container remove 8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:15:44 compute-0 podman[218443]: 2025-12-05 01:15:44.412485689 +0000 UTC m=+1.144141809 container remove d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d (image=quay.io/ceph/ceph:v18, name=competent_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:44 compute-0 systemd[1]: libpod-conmon-d1ce8efef730217ace4044429dd10edbf118ad3fb4ee9baf59cde34f458c5e1d.scope: Deactivated successfully.
Dec  5 01:15:44 compute-0 systemd[1]: libpod-conmon-8b557eec9f1a766e6d47bb8e562c8f4fdc9a4fdd8bc1dbf48f557ffdff3d868b.scope: Deactivated successfully.
Dec  5 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.614680015 +0000 UTC m=+0.068221459 container create 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:44 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2977895372' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  5 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.590725163 +0000 UTC m=+0.044266647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:44 compute-0 systemd[1]: Started libpod-conmon-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope.
Dec  5 01:15:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.781542875 +0000 UTC m=+0.235084369 container init 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.794853881 +0000 UTC m=+0.248395355 container start 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:15:44 compute-0 podman[218614]: 2025-12-05 01:15:44.801663194 +0000 UTC m=+0.255204658 container attach 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:15:44 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec  5 01:15:44 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec  5 01:15:46 compute-0 awesome_blackwell[218630]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:15:46 compute-0 awesome_blackwell[218630]: --> relative data size: 1.0
Dec  5 01:15:46 compute-0 awesome_blackwell[218630]: --> All data devices are unavailable
Dec  5 01:15:46 compute-0 systemd[1]: libpod-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Deactivated successfully.
Dec  5 01:15:46 compute-0 systemd[1]: libpod-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Consumed 1.195s CPU time.
Dec  5 01:15:46 compute-0 podman[218614]: 2025-12-05 01:15:46.07643716 +0000 UTC m=+1.529978634 container died 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v107: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b1e078342abc7603479bac8005322d11c9747485e9193885133b727057dca1a-merged.mount: Deactivated successfully.
Dec  5 01:15:46 compute-0 podman[218614]: 2025-12-05 01:15:46.15519602 +0000 UTC m=+1.608737474 container remove 8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_blackwell, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:15:46 compute-0 systemd[1]: libpod-conmon-8da448f965a8dee74ee578bc251dce9f037b87456420f57f51236ddd143ced1c.scope: Deactivated successfully.
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:46 compute-0 ansible-async_wrapper.py[218808]: Invoked with j19528271487 30 /home/zuul/.ansible/tmp/ansible-tmp-1764897345.4624417-37280-117956692638760/AnsiballZ_command.py _
Dec  5 01:15:46 compute-0 ansible-async_wrapper.py[218826]: Starting module and watcher
Dec  5 01:15:46 compute-0 ansible-async_wrapper.py[218826]: Start watching 218829 (30)
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:46 compute-0 ansible-async_wrapper.py[218829]: Start module (218829)
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:15:46 compute-0 ansible-async_wrapper.py[218808]: Return async_wrapper task started.
Dec  5 01:15:46 compute-0 python3[218832]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.452314099 +0000 UTC m=+0.064861039 container create e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:46 compute-0 systemd[1]: Started libpod-conmon-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope.
Dec  5 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.422753087 +0000 UTC m=+0.035300057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.556805278 +0000 UTC m=+0.169352248 container init e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.567260468 +0000 UTC m=+0.179807408 container start e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:15:46 compute-0 podman[218876]: 2025-12-05 01:15:46.572135498 +0000 UTC m=+0.184682438 container attach e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:15:46 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  5 01:15:46 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  5 01:15:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec  5 01:15:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec  5 01:15:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.080391283 +0000 UTC m=+0.064280143 container create 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:15:47 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:15:47 compute-0 unruffled_satoshi[218917]: 
Dec  5 01:15:47 compute-0 unruffled_satoshi[218917]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  5 01:15:47 compute-0 systemd[1]: Started libpod-conmon-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope.
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.059473123 +0000 UTC m=+0.043361993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:47 compute-0 systemd[1]: libpod-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope: Deactivated successfully.
Dec  5 01:15:47 compute-0 podman[218876]: 2025-12-05 01:15:47.162314687 +0000 UTC m=+0.774861627 container died e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe17f2ac7ccfb22364720e3ad9e9c700aa19f5927f972c1e32fda8b8975d9304-merged.mount: Deactivated successfully.
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.226285161 +0000 UTC m=+0.210174051 container init 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.233395251 +0000 UTC m=+0.217284101 container start 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:47 compute-0 podman[218876]: 2025-12-05 01:15:47.238112308 +0000 UTC m=+0.850659248 container remove e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446 (image=quay.io/ceph/ceph:v18, name=unruffled_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:15:47 compute-0 cranky_dhawan[219021]: 167 167
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.246241786 +0000 UTC m=+0.230130666 container attach 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:47 compute-0 systemd[1]: libpod-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope: Deactivated successfully.
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.248407034 +0000 UTC m=+0.232295924 container died 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:15:47 compute-0 systemd[1]: libpod-conmon-e1325acb9e5a8d2f77a78fd66874cee3218596535e8b5fd8438b85c396bdb446.scope: Deactivated successfully.
Dec  5 01:15:47 compute-0 ansible-async_wrapper.py[218829]: Module complete (218829)
Dec  5 01:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7bdb37e729878f1f7be674a19b8c55c91938140e1feaccfdaee56011b75dd56-merged.mount: Deactivated successfully.
Dec  5 01:15:47 compute-0 podman[219003]: 2025-12-05 01:15:47.331427767 +0000 UTC m=+0.315316617 container remove 53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_dhawan, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:47 compute-0 systemd[1]: libpod-conmon-53c21974d6c02c512a629bd1cb9be7d7bf93041833f6e2bdfdf0a62ac871614f.scope: Deactivated successfully.
Dec  5 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.533608993 +0000 UTC m=+0.072089652 container create 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:15:47 compute-0 python3[219103]: ansible-ansible.legacy.async_status Invoked with jid=j19528271487.218808 mode=status _async_dir=/root/.ansible_async
Dec  5 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.507604547 +0000 UTC m=+0.046085236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:47 compute-0 systemd[1]: Started libpod-conmon-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope.
Dec  5 01:15:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.709870885 +0000 UTC m=+0.248351544 container init 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.736985771 +0000 UTC m=+0.275466410 container start 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:15:47 compute-0 podman[219106]: 2025-12-05 01:15:47.743139436 +0000 UTC m=+0.281620125 container attach 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:15:47 compute-0 python3[219176]: ansible-ansible.legacy.async_status Invoked with jid=j19528271487.218808 mode=cleanup _async_dir=/root/.ansible_async
Dec  5 01:15:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec  5 01:15:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec  5 01:15:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v108: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:48 compute-0 eloquent_villani[219123]: {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    "0": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "devices": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "/dev/loop3"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            ],
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_name": "ceph_lv0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_size": "21470642176",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "name": "ceph_lv0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "tags": {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.crush_device_class": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.encrypted": "0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_id": "0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.vdo": "0"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            },
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "vg_name": "ceph_vg0"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        }
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    ],
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    "1": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "devices": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "/dev/loop4"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            ],
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_name": "ceph_lv1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_size": "21470642176",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "name": "ceph_lv1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "tags": {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.crush_device_class": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.encrypted": "0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_id": "1",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.vdo": "0"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            },
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "vg_name": "ceph_vg1"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        }
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    ],
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    "2": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "devices": [
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "/dev/loop5"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            ],
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_name": "ceph_lv2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_size": "21470642176",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "name": "ceph_lv2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "tags": {
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.cluster_name": "ceph",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.crush_device_class": "",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.encrypted": "0",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osd_id": "2",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:                "ceph.vdo": "0"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            },
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "type": "block",
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:            "vg_name": "ceph_vg2"
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:        }
Dec  5 01:15:48 compute-0 eloquent_villani[219123]:    ]
Dec  5 01:15:48 compute-0 eloquent_villani[219123]: }
Dec  5 01:15:48 compute-0 systemd[1]: libpod-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope: Deactivated successfully.
Dec  5 01:15:48 compute-0 podman[219106]: 2025-12-05 01:15:48.584014741 +0000 UTC m=+1.122495390 container died 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b27453b5a72d59fa4c75f838f2d42de6b6b1e107d77690bdac541629caab9ad-merged.mount: Deactivated successfully.
Dec  5 01:15:48 compute-0 python3[219206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:48 compute-0 podman[219106]: 2025-12-05 01:15:48.668219716 +0000 UTC m=+1.206700355 container remove 3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:15:48 compute-0 systemd[1]: libpod-conmon-3cb63d98c4d9d958d82d492fdadbe5d88c5b887c430a1325c54b74af98f8622d.scope: Deactivated successfully.
Dec  5 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.777633197 +0000 UTC m=+0.095815548 container create bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.741132549 +0000 UTC m=+0.059314930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:48 compute-0 systemd[1]: Started libpod-conmon-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope.
Dec  5 01:15:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.939142582 +0000 UTC m=+0.257324973 container init bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.95774185 +0000 UTC m=+0.275924211 container start bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:15:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec  5 01:15:48 compute-0 podman[219218]: 2025-12-05 01:15:48.965111518 +0000 UTC m=+0.283293889 container attach bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec  5 01:15:49 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:15:49 compute-0 interesting_diffie[219260]: 
Dec  5 01:15:49 compute-0 interesting_diffie[219260]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  5 01:15:49 compute-0 systemd[1]: libpod-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope: Deactivated successfully.
Dec  5 01:15:49 compute-0 podman[219388]: 2025-12-05 01:15:49.64051856 +0000 UTC m=+0.041890923 container died bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b06989004edc21ce131ffa038d7e089b299d8b54d613c11b3db71849ba74ef-merged.mount: Deactivated successfully.
Dec  5 01:15:49 compute-0 podman[219388]: 2025-12-05 01:15:49.711106661 +0000 UTC m=+0.112479014 container remove bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48 (image=quay.io/ceph/ceph:v18, name=interesting_diffie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:49 compute-0 systemd[1]: libpod-conmon-bddfc2c7f0561b7a494872e0c8fabb6d87d6536e62f19bad66d022e423753d48.scope: Deactivated successfully.
Dec  5 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.831788634 +0000 UTC m=+0.067571281 container create a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.80028868 +0000 UTC m=+0.036071387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:49 compute-0 systemd[1]: Started libpod-conmon-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope.
Dec  5 01:15:49 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec  5 01:15:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:49 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec  5 01:15:49 compute-0 podman[219414]: 2025-12-05 01:15:49.98321542 +0000 UTC m=+0.218998087 container init a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.009956676 +0000 UTC m=+0.245739293 container start a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.014420876 +0000 UTC m=+0.250203543 container attach a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:15:50 compute-0 sweet_franklin[219436]: 167 167
Dec  5 01:15:50 compute-0 systemd[1]: libpod-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope: Deactivated successfully.
Dec  5 01:15:50 compute-0 conmon[219436]: conmon a8f12a6bcf8a124aa994 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope/container/memory.events
Dec  5 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.023000995 +0000 UTC m=+0.258783732 container died a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:50 compute-0 podman[219427]: 2025-12-05 01:15:50.037829803 +0000 UTC m=+0.150358539 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ca6d39e744ead1c31b2e048923f13404b3ba97a6fa011a37801093368b1437c-merged.mount: Deactivated successfully.
Dec  5 01:15:50 compute-0 podman[219414]: 2025-12-05 01:15:50.082861639 +0000 UTC m=+0.318644256 container remove a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:15:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v109: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:50 compute-0 systemd[1]: libpod-conmon-a8f12a6bcf8a124aa99479064b770036249a2d9926746455fb4507f77e2d776f.scope: Deactivated successfully.
Dec  5 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.277870683 +0000 UTC m=+0.060273376 container create 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:50 compute-0 systemd[1]: Started libpod-conmon-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope.
Dec  5 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.253008027 +0000 UTC m=+0.035410770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.426099073 +0000 UTC m=+0.208501826 container init 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.458665106 +0000 UTC m=+0.241067809 container start 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:50 compute-0 podman[219473]: 2025-12-05 01:15:50.464643826 +0000 UTC m=+0.247046559 container attach 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Dec  5 01:15:50 compute-0 python3[219520]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.72799671 +0000 UTC m=+0.077535788 container create cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.688537953 +0000 UTC m=+0.038077111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:50 compute-0 systemd[1]: Started libpod-conmon-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope.
Dec  5 01:15:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.892112516 +0000 UTC m=+0.241651624 container init cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.905265149 +0000 UTC m=+0.254804217 container start cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:15:50 compute-0 podman[219521]: 2025-12-05 01:15:50.911056464 +0000 UTC m=+0.260595572 container attach cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:15:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  5 01:15:50 compute-0 podman[219538]: 2025-12-05 01:15:50.933586447 +0000 UTC m=+0.105572189 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:15:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  5 01:15:51 compute-0 podman[219545]: 2025-12-05 01:15:51.013689803 +0000 UTC m=+0.146726461 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 01:15:51 compute-0 ansible-async_wrapper.py[218826]: Done in kid B.
Dec  5 01:15:51 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:15:51 compute-0 cool_yonath[219536]: 
Dec  5 01:15:51 compute-0 cool_yonath[219536]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  5 01:15:51 compute-0 systemd[1]: libpod-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope: Deactivated successfully.
Dec  5 01:15:51 compute-0 podman[219521]: 2025-12-05 01:15:51.51380705 +0000 UTC m=+0.863346128 container died cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:51 compute-0 peaceful_benz[219490]: {
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_id": 0,
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "type": "bluestore"
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    },
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_id": 1,
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "type": "bluestore"
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    },
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_id": 2,
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:        "type": "bluestore"
Dec  5 01:15:51 compute-0 peaceful_benz[219490]:    }
Dec  5 01:15:51 compute-0 peaceful_benz[219490]: }
Dec  5 01:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d50106afa69687cb4a0160de4eef11caa1eff4dedd220cf2d50988e8e4e0b3d9-merged.mount: Deactivated successfully.
Dec  5 01:15:51 compute-0 systemd[1]: libpod-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Deactivated successfully.
Dec  5 01:15:51 compute-0 systemd[1]: libpod-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Consumed 1.114s CPU time.
Dec  5 01:15:51 compute-0 podman[219473]: 2025-12-05 01:15:51.592624801 +0000 UTC m=+1.375027484 container died 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:51 compute-0 podman[219521]: 2025-12-05 01:15:51.609214685 +0000 UTC m=+0.958753743 container remove cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb (image=quay.io/ceph/ceph:v18, name=cool_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:15:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-434cc006023a6be239467f2db25fd80183359630c2aaba5fed3657383af4956b-merged.mount: Deactivated successfully.
Dec  5 01:15:51 compute-0 systemd[1]: libpod-conmon-cc926f4f91f8e8eb90ef040c0152ac9d486b5875f7681dfddb9f62f83fba86eb.scope: Deactivated successfully.
Dec  5 01:15:51 compute-0 podman[219473]: 2025-12-05 01:15:51.675608214 +0000 UTC m=+1.458010907 container remove 11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:15:51 compute-0 systemd[1]: libpod-conmon-11f5457ef9e28925cd831f76d32957ae69b52b0c4f0ff9c0cca8373acd8e0c1e.scope: Deactivated successfully.
Dec  5 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:51 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1))
Dec  5 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  5 01:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  5 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:51 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec  5 01:15:51 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec  5 01:15:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  5 01:15:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec  5 01:15:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  5 01:15:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec  5 01:15:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v110: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.589805362 +0000 UTC m=+0.075388481 container create c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:15:52 compute-0 python3[219817]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.552101692 +0000 UTC m=+0.037684851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:52 compute-0 systemd[1]: Started libpod-conmon-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope.
Dec  5 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.686935463 +0000 UTC m=+0.068173567 container create ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:15:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.727643364 +0000 UTC m=+0.213226443 container init c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.74017297 +0000 UTC m=+0.225756089 container start c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:52 compute-0 systemd[1]: Started libpod-conmon-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope.
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.745970225 +0000 UTC m=+0.231553334 container attach c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:15:52 compute-0 objective_chandrasekhar[219849]: 167 167
Dec  5 01:15:52 compute-0 systemd[1]: libpod-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope: Deactivated successfully.
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.748923194 +0000 UTC m=+0.234506283 container died c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.661077531 +0000 UTC m=+0.042315645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  5 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.umynax", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  5 01:15:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:52 compute-0 ceph-mon[192914]: Deploying daemon rgw.rgw.compute-0.umynax on compute-0
Dec  5 01:15:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.808381747 +0000 UTC m=+0.189619891 container init ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.819512715 +0000 UTC m=+0.200750819 container start ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:15:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb7aaf0a234a89a073c504672e20a0a83314dacb3cdbc08eed491ae5429c17a5-merged.mount: Deactivated successfully.
Dec  5 01:15:52 compute-0 podman[219837]: 2025-12-05 01:15:52.82494517 +0000 UTC m=+0.206183324 container attach ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:52 compute-0 podman[219823]: 2025-12-05 01:15:52.849173029 +0000 UTC m=+0.334756118 container remove c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_chandrasekhar, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:15:52 compute-0 systemd[1]: libpod-conmon-c7a3899da3cacb25be6332624bd7c9c2f8d4729a473f88c972ad1eeff9ddd126.scope: Deactivated successfully.
Dec  5 01:15:52 compute-0 podman[219873]: 2025-12-05 01:15:52.915417844 +0000 UTC m=+0.114289473 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:15:52 compute-0 systemd[1]: Reloading.
Dec  5 01:15:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  5 01:15:52 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  5 01:15:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec  5 01:15:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec  5 01:15:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:15:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:15:53 compute-0 systemd[1]: Reloading.
Dec  5 01:15:53 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  5 01:15:53 compute-0 jovial_rosalind[219860]: 
Dec  5 01:15:53 compute-0 jovial_rosalind[219860]: [{"container_id": "f9154648f016", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.41%", "created": "2025-12-05T01:13:50.760162Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-05T01:13:50.817961Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601116Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2025-12-05T01:13:50.651037Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@crash.compute-0", "version": "18.2.7"}, {"container_id": "08717604c330", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "26.75%", "created": "2025-12-05T01:12:30.242897Z", "daemon_id": "compute-0.afshmv", "daemon_name": "mgr.compute-0.afshmv", "daemon_type": "mgr", "events": ["2025-12-05T01:14:54.814152Z daemon:mgr.compute-0.afshmv [INFO] \"Reconfigured mgr.compute-0.afshmv on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.600971Z", "memory_usage": 549453824, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-05T01:12:30.052616Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mgr.compute-0.afshmv", "version": "18.2.7"}, {"container_id": "aab8d24497e0", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "3.02%", "created": "2025-12-05T01:12:22.738694Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-05T01:14:53.891126Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.600092Z", "memory_request": 2147483648, "memory_usage": 39992688, "ports": [], "service_name": "mon", "started": "2025-12-05T01:12:26.716454Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@mon.compute-0", "version": "18.2.7"}, {"container_id": "a1423cde747e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.06%", "created": "2025-12-05T01:14:21.724026Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-05T01:14:21.812000Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601255Z", "memory_request": 4294967296, "memory_usage": 66857205, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:21.508629Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.0", "version": "18.2.7"}, {"container_id": "4bb9d1516855", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.67%", "created": "2025-12-05T01:14:27.459185Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-05T01:14:27.573058Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601389Z", "memory_request": 4294967296, "memory_usage": 67454894, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:27.278124Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.1", "version": "18.2.7"}, {"container_id": "6e6a7cedb28b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "3.85%", "created": "2025-12-05T01:14:34.085370Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-05T01:14:34.191548Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-05T01:15:41.601519Z", "memory_request": 4294967296, "memory_usage": 66280488, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-05T01:14:33.869842Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee@osd.2", "version": "18.2.7"}]
Dec  5 01:15:53 compute-0 podman[219837]: 2025-12-05 01:15:53.434204801 +0000 UTC m=+0.815442925 container died ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:15:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:15:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:15:53 compute-0 systemd[1]: libpod-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope: Deactivated successfully.
Dec  5 01:15:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f608feb84c02cf26cdd195146e22c6d2f7cc26d80110f959a6c47bd6209526b-merged.mount: Deactivated successfully.
Dec  5 01:15:53 compute-0 podman[219837]: 2025-12-05 01:15:53.72351281 +0000 UTC m=+1.104750904 container remove ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d (image=quay.io/ceph/ceph:v18, name=jovial_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:53 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.umynax for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:15:53 compute-0 systemd[1]: libpod-conmon-ed264eaefb19e8c6061b9807fa84ef7afb728aac6e9f62906cb8619552c0fe4d.scope: Deactivated successfully.
Dec  5 01:15:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec  5 01:15:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec  5 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.010789306 +0000 UTC m=+0.052469017 container create 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58ed6658262e9b6d4e8fc3242e23d55208eaef19b9dbf78789a789e9eb7a83be/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.umynax supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:53.986481934 +0000 UTC m=+0.028161705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.085563579 +0000 UTC m=+0.127243370 container init 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v111: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:54 compute-0 podman[220047]: 2025-12-05 01:15:54.104564618 +0000 UTC m=+0.146244369 container start 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:54 compute-0 bash[220047]: 07a0bf3345446b9c345c954d7d5a217cbe22e484fd24a9b291ee8aef91e9a01f
Dec  5 01:15:54 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.umynax for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:15:54 compute-0 radosgw[220065]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:15:54 compute-0 radosgw[220065]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Dec  5 01:15:54 compute-0 radosgw[220065]: framework: beast
Dec  5 01:15:54 compute-0 radosgw[220065]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  5 01:15:54 compute-0 radosgw[220065]: init_numa not setting numa affinity
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1))
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 90453ab0-db65-46d9-9577-6791a8ecefd3 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1))
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec  5 01:15:54 compute-0 ceph-mgr[193209]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec  5 01:15:54 compute-0 python3[220240]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  5 01:15:54 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: Saving service rgw.rgw spec with placement compute-0
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  5 01:15:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ksxtqc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  5 01:15:54 compute-0 ceph-mon[192914]: Deploying daemon mds.cephfs.compute-0.ksxtqc on compute-0
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  5 01:15:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Dec  5 01:15:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  5 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.83186563 +0000 UTC m=+0.060997225 container create 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:54 compute-0 systemd[1]: Started libpod-conmon-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope.
Dec  5 01:15:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.810324953 +0000 UTC m=+0.039456568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec  5 01:15:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 42 pg[8.0( empty local-lis/les=0/0 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec  5 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.946631214 +0000 UTC m=+0.175762839 container init 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.960743262 +0000 UTC m=+0.189874867 container start 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:15:54 compute-0 podman[220253]: 2025-12-05 01:15:54.965922921 +0000 UTC m=+0.195054546 container attach 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:15:55 compute-0 podman[220277]: 2025-12-05 01:15:55.010621608 +0000 UTC m=+0.131091602 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.125434054 +0000 UTC m=+0.056490575 container create cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:55 compute-0 systemd[1]: Started libpod-conmon-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope.
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.104344399 +0000 UTC m=+0.035400950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.238657276 +0000 UTC m=+0.169713807 container init cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.248345246 +0000 UTC m=+0.179401777 container start cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.253256428 +0000 UTC m=+0.184312979 container attach cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:55 compute-0 musing_robinson[220339]: 167 167
Dec  5 01:15:55 compute-0 systemd[1]: libpod-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope: Deactivated successfully.
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.263238965 +0000 UTC m=+0.194295586 container died cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-022d498befe28f59a077d2cd99588156b644cde4140ee5bf5be7113f9798c047-merged.mount: Deactivated successfully.
Dec  5 01:15:55 compute-0 podman[220323]: 2025-12-05 01:15:55.333138137 +0000 UTC m=+0.264194668 container remove cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:55 compute-0 systemd[1]: libpod-conmon-cc5d7b2cfa7720f58aef668530290cb9a7f931750e7583cdbc6f846f34fc3260.scope: Deactivated successfully.
Dec  5 01:15:55 compute-0 systemd[1]: Reloading.
Dec  5 01:15:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:15:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Dec  5 01:15:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3381832146' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  5 01:15:55 compute-0 pedantic_robinson[220281]: 
Dec  5 01:15:55 compute-0 pedantic_robinson[220281]: {"fsid":"cbd280d3-cbd8-528b-ace6-2b3a887cdcee","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":208,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1764897281,"num_in_osds":3,"osd_in_since":1764897248,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84131840,"bytes_avail":64327794688,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":5,"modified":"2025-12-05T01:15:42.082279+0000","services":{}},"progress_events":{"90453ab0-db65-46d9-9577-6791a8ecefd3":{"message":"Updating rgw.rgw deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Dec  5 01:15:55 compute-0 podman[220253]: 2025-12-05 01:15:55.672831787 +0000 UTC m=+0.901963382 container died 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Dec  5 01:15:55 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Dec  5 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  5 01:15:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  5 01:15:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  5 01:15:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  5 01:15:55 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  5 01:15:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 43 pg[8.0( empty local-lis/les=42/43 n=0 ec=42/42 lis/c=0/0 les/c/f=0/0/0 sis=42) [1] r=0 lpr=42 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:55 compute-0 systemd[1]: libpod-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope: Deactivated successfully.
Dec  5 01:15:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9efcccdbce815484f880e276f2886d781ee70c92bccb8a10ba8a2ed2762da3aa-merged.mount: Deactivated successfully.
Dec  5 01:15:55 compute-0 podman[220253]: 2025-12-05 01:15:55.932599475 +0000 UTC m=+1.161731080 container remove 16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f (image=quay.io/ceph/ceph:v18, name=pedantic_robinson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:15:55 compute-0 systemd[1]: libpod-conmon-16720125ea74b1f30c50302c8a17d81c14cf776e5d8a92e58dfc80fffd403e9f.scope: Deactivated successfully.
Dec  5 01:15:55 compute-0 systemd[1]: Reloading.
Dec  5 01:15:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v114: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:15:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:15:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:15:56 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 11 completed events
Dec  5 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:15:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:56 compute-0 ceph-mgr[193209]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  5 01:15:56 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.ksxtqc for cbd280d3-cbd8-528b-ace6-2b3a887cdcee...
Dec  5 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.813710566 +0000 UTC m=+0.071075764 container create a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  5 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  5 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:56 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  5 01:15:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.781484213 +0000 UTC m=+0.038849401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:15:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb45c5da733bd8b6a28f6ab4f0fbdfa4031cd81b00f8f981ba3128832202eef/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ksxtqc supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  5 01:15:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Dec  5 01:15:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  5 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.90942199 +0000 UTC m=+0.166787248 container init a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:15:56 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  5 01:15:56 compute-0 python3[220553]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:56 compute-0 podman[220534]: 2025-12-05 01:15:56.935779486 +0000 UTC m=+0.193144694 container start a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:15:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 44 pg[9.0( empty local-lis/les=0/0 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:56 compute-0 bash[220534]: a2540f6b0515ed7d779a8ff346084fa35650221c77d67c6518898d317ca0e92e
Dec  5 01:15:56 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  5 01:15:56 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.ksxtqc for cbd280d3-cbd8-528b-ace6-2b3a887cdcee.
Dec  5 01:15:56 compute-0 ceph-mds[220561]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:15:56 compute-0 ceph-mds[220561]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Dec  5 01:15:56 compute-0 ceph-mds[220561]: main not setting numa affinity
Dec  5 01:15:57 compute-0 ceph-mds[220561]: pidfile_write: ignore empty --pid-file
Dec  5 01:15:57 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mds-cephfs-compute-0-ksxtqc[220557]: starting mds.cephfs.compute-0.ksxtqc at 
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 2 from mon.0
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] as mds.0
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ksxtqc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 new map
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0113#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-05T01:15:33.603075+0000#012modified#0112025-12-05T01:15:57.040449+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.ksxtqc{0:14271} state up:creating seq 1 addr [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 3 from mon.0
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map i am now mds.0.3
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x1
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x100
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x600
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x601
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x602
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x603
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x604
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x605
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x606
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x607
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x608
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.cache creating system inode with ino:0x609
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:boot
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:creating}
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.051392043 +0000 UTC m=+0.074438355 container create 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ksxtqc"} v 0) v1
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ksxtqc"}]: dispatch
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e3 all = 0
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1))
Dec  5 01:15:57 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 6761f519-01b4-4e0e-8c9d-4575f616a5ba (Updating mds.cephfs deployment (+1 -> 1)) in 3 seconds
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Dec  5 01:15:57 compute-0 ceph-mds[220561]: mds.0.3 creating_done
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.ksxtqc is now active in filesystem cephfs as rank 0
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.018467061 +0000 UTC m=+0.041513353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:57 compute-0 systemd[1]: Started libpod-conmon-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope.
Dec  5 01:15:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.219463765 +0000 UTC m=+0.242510057 container init 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.234067997 +0000 UTC m=+0.257114289 container start 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.241426014 +0000 UTC m=+0.264472316 container attach 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775645715' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  5 01:15:57 compute-0 epic_albattani[220612]: 
Dec  5 01:15:57 compute-0 epic_albattani[220612]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.umynax","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  5 01:15:57 compute-0 systemd[1]: libpod-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope: Deactivated successfully.
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.84047769 +0000 UTC m=+0.863523992 container died 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  5 01:15:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: daemon mds.cephfs.compute-0.ksxtqc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  5 01:15:57 compute-0 ceph-mon[192914]: Cluster is now healthy
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:57 compute-0 ceph-mon[192914]: daemon mds.cephfs.compute-0.ksxtqc is now active in filesystem cephfs as rank 0
Dec  5 01:15:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186da3fbf6c1d1e49f4ca8c0498aeee7587cdbb81fcd599c8114e6ec71118a2-merged.mount: Deactivated successfully.
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  5 01:15:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  5 01:15:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  5 01:15:57 compute-0 podman[220562]: 2025-12-05 01:15:57.935317811 +0000 UTC m=+0.958364063 container remove 938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d (image=quay.io/ceph/ceph:v18, name=epic_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:15:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 45 pg[9.0( empty local-lis/les=44/45 n=0 ec=44/44 lis/c=0/0 les/c/f=0/0/0 sis=44) [1] r=0 lpr=44 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:15:57 compute-0 systemd[1]: libpod-conmon-938dc171aa4231302a3411ba987fff008e0e4ea9bee3f2f9eebac8c03e33e38d.scope: Deactivated successfully.
Dec  5 01:15:58 compute-0 podman[220781]: 2025-12-05 01:15:58.030215213 +0000 UTC m=+0.155610669 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 01:15:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v117: 195 pgs: 1 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Dec  5 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e4 new map
Dec  5 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-05T01:15:33.603075+0000#012modified#0112025-12-05T01:15:58.103370+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14271}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.ksxtqc{0:14271} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Dec  5 01:15:58 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc Updating MDS map to version 4 from mon.0
Dec  5 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map i am now mds.0.3
Dec  5 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 handle_mds_map state change up:creating --> up:active
Dec  5 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 recovery_done -- successful recovery!
Dec  5 01:15:58 compute-0 ceph-mds[220561]: mds.0.3 active_start
Dec  5 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/1147292858,v1:192.168.122.100:6815/1147292858] up:active
Dec  5 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.ksxtqc=up:active}
Dec  5 01:15:58 compute-0 podman[220882]: 2025-12-05 01:15:58.487990695 +0000 UTC m=+0.104228343 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:15:58 compute-0 podman[220882]: 2025-12-05 01:15:58.622450357 +0000 UTC m=+0.238688005 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:15:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec  5 01:15:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec  5 01:15:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec  5 01:15:58 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec  5 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  5 01:15:58 compute-0 python3[220973]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  5 01:15:58 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  5 01:15:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  5 01:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Dec  5 01:15:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.068547677 +0000 UTC m=+0.072037791 container create f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:15:59 compute-0 systemd[1]: Started libpod-conmon-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope.
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.040950758 +0000 UTC m=+0.044440912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:15:59 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.197251984 +0000 UTC m=+0.200742108 container init f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.207038866 +0000 UTC m=+0.210528970 container start f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.212114182 +0000 UTC m=+0.215604306 container attach f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ffb2308-ca2f-4984-94fd-edfcb3085c21 does not exist
Dec  5 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 952f3be6-a194-4a10-824b-5467be011e24 does not exist
Dec  5 01:15:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 681b0222-2a70-41bf-a041-12c7b9b44e8c does not exist
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:15:59 compute-0 podman[158197]: time="2025-12-05T01:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34299 "" "Go-http-client/1.1"
Dec  5 01:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7196 "" "Go-http-client/1.1"
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196859533' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  5 01:15:59 compute-0 condescending_wing[221026]: mimic
Dec  5 01:15:59 compute-0 systemd[1]: libpod-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope: Deactivated successfully.
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.829798897 +0000 UTC m=+0.833289001 container died f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:15:59 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 46 pg[10.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:15:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c636909733330c052d040703b15c4061dfca02b90beb94d2415b48e406173988-merged.mount: Deactivated successfully.
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  5 01:15:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  5 01:15:59 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  5 01:15:59 compute-0 podman[220992]: 2025-12-05 01:15:59.995616269 +0000 UTC m=+0.999106373 container remove f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567 (image=quay.io/ceph/ceph:v18, name=condescending_wing, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:16:00 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 47 pg[10.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [2] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:00 compute-0 systemd[1]: libpod-conmon-f4999778980d731c06648a939878fc7b6a59a37897d907216bf9bc54a060e567.scope: Deactivated successfully.
Dec  5 01:16:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v120: 196 pgs: 2 unknown, 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 5 op/s
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.511082097 +0000 UTC m=+0.078568536 container create f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.48431109 +0000 UTC m=+0.051797539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:00 compute-0 systemd[1]: Started libpod-conmon-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope.
Dec  5 01:16:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.660744606 +0000 UTC m=+0.228231035 container init f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.679610461 +0000 UTC m=+0.247096860 container start f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.684343868 +0000 UTC m=+0.251830337 container attach f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:00 compute-0 heuristic_mccarthy[221281]: 167 167
Dec  5 01:16:00 compute-0 systemd[1]: libpod-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope: Deactivated successfully.
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.69149055 +0000 UTC m=+0.258977009 container died f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:16:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4caa69792e8a755c786529ffff40b8821773cd01441bfd1bb74e3cff5829db51-merged.mount: Deactivated successfully.
Dec  5 01:16:00 compute-0 podman[221265]: 2025-12-05 01:16:00.780032131 +0000 UTC m=+0.347518560 container remove f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:16:00 compute-0 systemd[1]: libpod-conmon-f2f53b1b259c481aa7b178612574b12e0b1789aca2273199dbda0f1f2a2c5fb6.scope: Deactivated successfully.
Dec  5 01:16:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec  5 01:16:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec  5 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  5 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  5 01:16:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  5 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Dec  5 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  5 01:16:01 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 48 pg[11.0( empty local-lis/les=0/0 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:01 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/2042496677' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  5 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.041426573 +0000 UTC m=+0.079704056 container create 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:16:01 compute-0 python3[221323]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:16:01 compute-0 systemd[1]: Started libpod-conmon-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope.
Dec  5 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.012755085 +0000 UTC m=+0.051032588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.193373454 +0000 UTC m=+0.075289098 container create 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.203393142 +0000 UTC m=+0.241670665 container init 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.220817519 +0000 UTC m=+0.259094982 container start 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:01 compute-0 podman[221329]: 2025-12-05 01:16:01.224770835 +0000 UTC m=+0.263048298 container attach 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:16:01 compute-0 systemd[1]: Started libpod-conmon-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope.
Dec  5 01:16:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.167002017 +0000 UTC m=+0.048917701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.292782986 +0000 UTC m=+0.174698680 container init 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:16:01 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 12 completed events
Dec  5 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.310518232 +0000 UTC m=+0.192433876 container start 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:16:01 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 7ef99225-ce26-48d9-bfcb-d0db672cf464 (Global Recovery Event) in 5 seconds
Dec  5 01:16:01 compute-0 podman[221342]: 2025-12-05 01:16:01.316195634 +0000 UTC m=+0.198111278 container attach 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: ERROR   01:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:16:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:16:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  5 01:16:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  5 01:16:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec  5 01:16:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec  5 01:16:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Dec  5 01:16:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585021271' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  5 01:16:01 compute-0 festive_boyd[221362]: 
Dec  5 01:16:01 compute-0 festive_boyd[221362]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Dec  5 01:16:01 compute-0 systemd[1]: libpod-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope: Deactivated successfully.
Dec  5 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  5 01:16:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  5 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  5 01:16:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  5 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Dec  5 01:16:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  5 01:16:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  5 01:16:02 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:02 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  5 01:16:02 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 49 pg[11.0( empty local-lis/les=48/49 n=0 ec=48/48 lis/c=0/0 les/c/f=0/0/0 sis=48) [1] r=0 lpr=48 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:02 compute-0 podman[221392]: 2025-12-05 01:16:02.069204569 +0000 UTC m=+0.055639116 container died 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v123: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 4.0 KiB/s wr, 12 op/s
Dec  5 01:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e37451e72da8781ad8f32860d7e053c129abf850104bef5442e70f012e6bd46d-merged.mount: Deactivated successfully.
Dec  5 01:16:02 compute-0 podman[221392]: 2025-12-05 01:16:02.132971457 +0000 UTC m=+0.119405984 container remove 3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73 (image=quay.io/ceph/ceph:v18, name=festive_boyd, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:16:02 compute-0 systemd[1]: libpod-conmon-3d4250468e7a780451cab5a24201f02f131cb2768d7e5c69915d5c6b05963f73.scope: Deactivated successfully.
Dec  5 01:16:02 compute-0 flamboyant_bhabha[221343]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:16:02 compute-0 flamboyant_bhabha[221343]: --> relative data size: 1.0
Dec  5 01:16:02 compute-0 flamboyant_bhabha[221343]: --> All data devices are unavailable
Dec  5 01:16:02 compute-0 systemd[1]: libpod-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Deactivated successfully.
Dec  5 01:16:02 compute-0 systemd[1]: libpod-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Consumed 1.182s CPU time.
Dec  5 01:16:02 compute-0 podman[221329]: 2025-12-05 01:16:02.497036209 +0000 UTC m=+1.535313772 container died 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-57acb385ffe68f31534ea9a48f8fe4560287132bd21e4dcf888cd80c32b8eb40-merged.mount: Deactivated successfully.
Dec  5 01:16:02 compute-0 podman[221329]: 2025-12-05 01:16:02.842750682 +0000 UTC m=+1.881028185 container remove 49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:02 compute-0 podman[221426]: 2025-12-05 01:16:02.855358252 +0000 UTC m=+0.315869157 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:16:02 compute-0 systemd[1]: libpod-conmon-49b8defef1571414628aafda99e4a5d4b817b2202fd68d2b3fb734e7edeccd5f.scope: Deactivated successfully.
Dec  5 01:16:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  5 01:16:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  5 01:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  5 01:16:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  5 01:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  5 01:16:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  5 01:16:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  5 01:16:03 compute-0 ceph-mon[192914]: from='client.? 192.168.122.100:0/343582894' entity='client.rgw.rgw.compute-0.umynax' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  5 01:16:03 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-rgw-rgw-compute-0-umynax[220061]: 2025-12-05T01:16:03.258+0000 7fb96d85d940 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  5 01:16:03 compute-0 radosgw[220065]: LDAP not started since no server URIs were provided in the configuration.
Dec  5 01:16:03 compute-0 radosgw[220065]: framework: beast
Dec  5 01:16:03 compute-0 radosgw[220065]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  5 01:16:03 compute-0 radosgw[220065]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  5 01:16:03 compute-0 radosgw[220065]: starting handler: beast
Dec  5 01:16:03 compute-0 radosgw[220065]: set uid:gid to 167:167 (ceph:ceph)
Dec  5 01:16:03 compute-0 radosgw[220065]: mgrc service_daemon_register rgw.14277 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.umynax,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=95d46763-35cf-41c2-8ad8-575beccf8981,zone_name=default,zonegroup_id=67b9f8de-8a42-4509-9f7a-8c2563510693,zonegroup_name=default}
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.80735656 +0000 UTC m=+0.080775850 container create e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:16:03 compute-0 systemd[1]: Started libpod-conmon-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope.
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.785391621 +0000 UTC m=+0.058810931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.935705998 +0000 UTC m=+0.209125378 container init e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.950961251 +0000 UTC m=+0.224380541 container start e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.954993343 +0000 UTC m=+0.228412633 container attach e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:16:03 compute-0 gallant_edison[222158]: 167 167
Dec  5 01:16:03 compute-0 systemd[1]: libpod-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope: Deactivated successfully.
Dec  5 01:16:03 compute-0 podman[222142]: 2025-12-05 01:16:03.965406192 +0000 UTC m=+0.238825552 container died e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:16:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec  5 01:16:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a678252b80a9c7108da7e8b4cd8946381b58bf7fb7af713a673000abca827023-merged.mount: Deactivated successfully.
Dec  5 01:16:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec  5 01:16:04 compute-0 podman[222142]: 2025-12-05 01:16:04.046100638 +0000 UTC m=+0.319519928 container remove e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:16:04 compute-0 systemd[1]: libpod-conmon-e3d9e729c4f49e844bae0cf6f543768a42ff85c885b5544522f2600bef1157df.scope: Deactivated successfully.
Dec  5 01:16:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v125: 197 pgs: 1 creating+peering, 196 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 249 B/s rd, 3.9 KiB/s wr, 8 op/s
Dec  5 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.299553294 +0000 UTC m=+0.079253538 container create 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.268577555 +0000 UTC m=+0.048277839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:04 compute-0 systemd[1]: Started libpod-conmon-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope.
Dec  5 01:16:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.456671829 +0000 UTC m=+0.236372123 container init 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.485699314 +0000 UTC m=+0.265399548 container start 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:16:04 compute-0 podman[222180]: 2025-12-05 01:16:04.490058415 +0000 UTC m=+0.269758649 container attach 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:16:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  5 01:16:04 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  5 01:16:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec  5 01:16:05 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]: {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    "0": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "devices": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "/dev/loop3"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            ],
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_name": "ceph_lv0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_size": "21470642176",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "name": "ceph_lv0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "tags": {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.crush_device_class": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.encrypted": "0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_id": "0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.vdo": "0"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            },
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "vg_name": "ceph_vg0"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        }
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    ],
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    "1": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "devices": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "/dev/loop4"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            ],
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_name": "ceph_lv1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_size": "21470642176",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "name": "ceph_lv1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "tags": {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.crush_device_class": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.encrypted": "0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_id": "1",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.vdo": "0"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            },
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "vg_name": "ceph_vg1"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        }
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    ],
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    "2": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "devices": [
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "/dev/loop5"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            ],
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_name": "ceph_lv2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_size": "21470642176",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "name": "ceph_lv2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "tags": {
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.crush_device_class": "",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.encrypted": "0",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osd_id": "2",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:                "ceph.vdo": "0"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            },
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "type": "block",
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:            "vg_name": "ceph_vg2"
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:        }
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]:    ]
Dec  5 01:16:05 compute-0 mystifying_brattain[222196]: }
Dec  5 01:16:05 compute-0 systemd[1]: libpod-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope: Deactivated successfully.
Dec  5 01:16:05 compute-0 podman[222180]: 2025-12-05 01:16:05.320737902 +0000 UTC m=+1.100438176 container died 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:16:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-afe2ea0de8d26a1bde57826d06d74a9102d4fa63eb7d8614de4b0313fc67f405-merged.mount: Deactivated successfully.
Dec  5 01:16:05 compute-0 podman[222180]: 2025-12-05 01:16:05.393091247 +0000 UTC m=+1.172791481 container remove 8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:05 compute-0 systemd[1]: libpod-conmon-8b84c4ebc3845e037cded2cbe9735e8336376756e6c31598155bebf27eb32758.scope: Deactivated successfully.
Dec  5 01:16:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec  5 01:16:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec  5 01:16:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.16 deep-scrub starts
Dec  5 01:16:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v126: 197 pgs: 1 active+clean+scrubbing, 1 creating+peering, 195 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 5 op/s
Dec  5 01:16:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.16 deep-scrub ok
Dec  5 01:16:06 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 13 completed events
Dec  5 01:16:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.330632536 +0000 UTC m=+0.047053705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.536860853 +0000 UTC m=+0.253282012 container create 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:16:06 compute-0 systemd[1]: Started libpod-conmon-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope.
Dec  5 01:16:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.943531324 +0000 UTC m=+0.659952493 container init 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.95670751 +0000 UTC m=+0.673128649 container start 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.962033837 +0000 UTC m=+0.678455016 container attach 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:16:06 compute-0 funny_nightingale[222373]: 167 167
Dec  5 01:16:06 compute-0 systemd[1]: libpod-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope: Deactivated successfully.
Dec  5 01:16:06 compute-0 podman[222356]: 2025-12-05 01:16:06.972067236 +0000 UTC m=+0.688488385 container died 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:16:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c7ed388d8b1b73c5642f2a925864094ba660af63823941e89c103d119f6f15a-merged.mount: Deactivated successfully.
Dec  5 01:16:07 compute-0 podman[222356]: 2025-12-05 01:16:07.039063873 +0000 UTC m=+0.755485012 container remove 0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:07 compute-0 systemd[1]: libpod-conmon-0627c3f6b858ab367dfa0785dcb26c66265a5785d1c54a8206bdcfa339daa422.scope: Deactivated successfully.
Dec  5 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.30811376 +0000 UTC m=+0.072731236 container create 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:16:07 compute-0 systemd[1]: Started libpod-conmon-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope.
Dec  5 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.283523369 +0000 UTC m=+0.048140875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.444818739 +0000 UTC m=+0.209436275 container init 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.463458576 +0000 UTC m=+0.228076082 container start 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:16:07 compute-0 podman[222396]: 2025-12-05 01:16:07.469608867 +0000 UTC m=+0.234226383 container attach 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:16:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v127: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.8 KiB/s wr, 228 op/s
Dec  5 01:16:08 compute-0 focused_einstein[222412]: {
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_id": 0,
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "type": "bluestore"
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    },
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_id": 1,
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "type": "bluestore"
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    },
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_id": 2,
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:16:08 compute-0 focused_einstein[222412]:        "type": "bluestore"
Dec  5 01:16:08 compute-0 focused_einstein[222412]:    }
Dec  5 01:16:08 compute-0 focused_einstein[222412]: }
Dec  5 01:16:08 compute-0 podman[222396]: 2025-12-05 01:16:08.611235513 +0000 UTC m=+1.375853029 container died 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:16:08 compute-0 systemd[1]: libpod-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Deactivated successfully.
Dec  5 01:16:08 compute-0 systemd[1]: libpod-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Consumed 1.149s CPU time.
Dec  5 01:16:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1bdefb1f559caa4cc06cc2e3e8d365e9f7991be0de1ad7249fafb6563d9de0-merged.mount: Deactivated successfully.
Dec  5 01:16:08 compute-0 podman[222396]: 2025-12-05 01:16:08.719112793 +0000 UTC m=+1.483730309 container remove 41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_einstein, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:16:08 compute-0 systemd[1]: libpod-conmon-41a85ebd5227ba0fe4b5f1ee7507163fbeb8a4197b4ae57fa7cd7426be233072.scope: Deactivated successfully.
Dec  5 01:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:16:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:16:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ad02fcc-60f1-4501-9188-dc8d2ba57528 does not exist
Dec  5 01:16:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 99c36fca-cf95-4237-a8e6-0afc85c77f81 does not exist
Dec  5 01:16:09 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec  5 01:16:09 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec  5 01:16:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:09 compute-0 python3[222644]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.040991166 +0000 UTC m=+0.099538771 container create fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec  5 01:16:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec  5 01:16:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v128: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.0 KiB/s wr, 195 op/s
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.004262578 +0000 UTC m=+0.062810203 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:16:10 compute-0 systemd[1]: Started libpod-conmon-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope.
Dec  5 01:16:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.17172697 +0000 UTC m=+0.230274575 container init fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.182036455 +0000 UTC m=+0.240584070 container start fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.188281289 +0000 UTC m=+0.246829014 container attach fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:16:10 compute-0 podman[222780]: 2025-12-05 01:16:10.399787771 +0000 UTC m=+0.107927162 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:10 compute-0 bold_perlman[222690]: could not fetch user info: no user info saved
Dec  5 01:16:10 compute-0 podman[222780]: 2025-12-05 01:16:10.499868935 +0000 UTC m=+0.208008326 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:16:10 compute-0 systemd[1]: libpod-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope: Deactivated successfully.
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.552041791 +0000 UTC m=+0.610589436 container died fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:16:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-a82e3e90eeb063f1944073b40bf371a9da99df4c5fd77dbc908ca7555daf29f1-merged.mount: Deactivated successfully.
Dec  5 01:16:10 compute-0 podman[222666]: 2025-12-05 01:16:10.636393139 +0000 UTC m=+0.694940754 container remove fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f (image=quay.io/ceph/ceph:v18, name=bold_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:16:10 compute-0 systemd[1]: libpod-conmon-fd1d7b6ac1028d21b73eae51edb99cb762a8b7a7f97aeeb2172f8c6a1312ff7f.scope: Deactivated successfully.
Dec  5 01:16:11 compute-0 python3[222922]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid cbd280d3-cbd8-528b-ace6-2b3a887cdcee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:16:11 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Dec  5 01:16:11 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.161622938 +0000 UTC m=+0.075673439 container create f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:11 compute-0 systemd[1]: Started libpod-conmon-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope.
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.13065268 +0000 UTC m=+0.044703201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Dec  5 01:16:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.317637963 +0000 UTC m=+0.231688524 container init f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.328809762 +0000 UTC m=+0.242860263 container start f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.343028536 +0000 UTC m=+0.257079087 container attach f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:16:11 compute-0 lucid_bose[222977]: {
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "user_id": "openstack",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "display_name": "openstack",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "email": "",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "suspended": 0,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "max_buckets": 1000,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "subusers": [],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "keys": [
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        {
Dec  5 01:16:11 compute-0 lucid_bose[222977]:            "user": "openstack",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:            "access_key": "D49DRQMAOR3T1P2P12S7",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:            "secret_key": "DUkrPGeChDkks0mQE8eW45wrhj9VE5DM73LT72oz"
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        }
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    ],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "swift_keys": [],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "caps": [],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "op_mask": "read, write, delete",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "default_placement": "",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "default_storage_class": "",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "placement_tags": [],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "bucket_quota": {
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "enabled": false,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "check_on_raw": false,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_size": -1,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_size_kb": 0,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_objects": -1
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    },
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "user_quota": {
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "enabled": false,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "check_on_raw": false,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_size": -1,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_size_kb": 0,
Dec  5 01:16:11 compute-0 lucid_bose[222977]:        "max_objects": -1
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    },
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "temp_url_keys": [],
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "type": "rgw",
Dec  5 01:16:11 compute-0 lucid_bose[222977]:    "mfa_ids": []
Dec  5 01:16:11 compute-0 lucid_bose[222977]: }
Dec  5 01:16:11 compute-0 lucid_bose[222977]: 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b7f246a2-4f7b-481e-af96-fd227bb0ce22 does not exist
Dec  5 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d2666066-421e-4801-ab4e-d1da52f1042f does not exist
Dec  5 01:16:11 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d9f7f5c-ceae-4484-8597-31e33157afc2 does not exist
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:16:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:16:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:16:11 compute-0 systemd[1]: libpod-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope: Deactivated successfully.
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.659253482 +0000 UTC m=+0.573303983 container died f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8205e836f11254ed6a26eda7960b5918c647a7a0f3c31d203a26cd000dbbcec-merged.mount: Deactivated successfully.
Dec  5 01:16:11 compute-0 podman[222945]: 2025-12-05 01:16:11.704792905 +0000 UTC m=+0.618843396 container remove f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2 (image=quay.io/ceph/ceph:v18, name=lucid_bose, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:11 compute-0 systemd[1]: libpod-conmon-f26438e79f01ce352826329ea27b8eff785cfcc843e358178b1a3115332e85d2.scope: Deactivated successfully.
Dec  5 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:16:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v129: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 3.4 KiB/s wr, 159 op/s
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.491541143 +0000 UTC m=+0.083155437 container create 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.443388439 +0000 UTC m=+0.035002812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:12 compute-0 systemd[1]: Started libpod-conmon-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope.
Dec  5 01:16:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.597565282 +0000 UTC m=+0.189179575 container init 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.615776207 +0000 UTC m=+0.207390520 container start 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:16:12 compute-0 youthful_williamson[223264]: 167 167
Dec  5 01:16:12 compute-0 systemd[1]: libpod-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope: Deactivated successfully.
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.623504892 +0000 UTC m=+0.215119255 container attach 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.626101964 +0000 UTC m=+0.217716297 container died 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68cd4b69316e6954e0c5f0dc7e924ec972d7d095196d1e5ec006da85c412189-merged.mount: Deactivated successfully.
Dec  5 01:16:12 compute-0 podman[223248]: 2025-12-05 01:16:12.699231791 +0000 UTC m=+0.290846114 container remove 6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_williamson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:12 compute-0 systemd[1]: libpod-conmon-6d5576026b1256a30f697b8e64b9b580732488c49670455d68afdf486336fb19.scope: Deactivated successfully.
Dec  5 01:16:12 compute-0 podman[223287]: 2025-12-05 01:16:12.993345044 +0000 UTC m=+0.093932455 container create cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:12.956702988 +0000 UTC m=+0.057290369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:13 compute-0 systemd[1]: Started libpod-conmon-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope.
Dec  5 01:16:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.162277106 +0000 UTC m=+0.262864547 container init cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.173077076 +0000 UTC m=+0.273664477 container start cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:13 compute-0 podman[223287]: 2025-12-05 01:16:13.179178755 +0000 UTC m=+0.279766216 container attach cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:16:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v130: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.1 KiB/s wr, 144 op/s
Dec  5 01:16:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  5 01:16:14 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  5 01:16:14 compute-0 dreamy_stonebraker[223302]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:16:14 compute-0 dreamy_stonebraker[223302]: --> relative data size: 1.0
Dec  5 01:16:14 compute-0 dreamy_stonebraker[223302]: --> All data devices are unavailable
Dec  5 01:16:14 compute-0 systemd[1]: libpod-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Deactivated successfully.
Dec  5 01:16:14 compute-0 systemd[1]: libpod-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Consumed 1.317s CPU time.
Dec  5 01:16:14 compute-0 podman[223331]: 2025-12-05 01:16:14.645996554 +0000 UTC m=+0.065045234 container died cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-825b189b260d7badc4354b726266866072d70c534d1be80b702852be3596acdd-merged.mount: Deactivated successfully.
Dec  5 01:16:14 compute-0 podman[223331]: 2025-12-05 01:16:14.76848821 +0000 UTC m=+0.187536810 container remove cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:14 compute-0 systemd[1]: libpod-conmon-cfd41497ecc112e2bebe0b31c13481863dd2a4b0ff27fa8ef37e960034ed65c5.scope: Deactivated successfully.
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.039355519 +0000 UTC m=+0.094326266 container create db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.003789833 +0000 UTC m=+0.058760620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:16:16
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v131: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 133 op/s
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.rgw.root', 'backups', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control']
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:16:16 compute-0 systemd[1]: Started libpod-conmon-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope.
Dec  5 01:16:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.18336137 +0000 UTC m=+0.238332167 container init db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.202709727 +0000 UTC m=+0.257680464 container start db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.209724661 +0000 UTC m=+0.264695398 container attach db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:16:16 compute-0 ecstatic_thompson[223499]: 167 167
Dec  5 01:16:16 compute-0 systemd[1]: libpod-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope: Deactivated successfully.
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.215367978 +0000 UTC m=+0.270338715 container died db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:16:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfa105965becef1b237f68056e224e8fc5cc92adbb4bdf0fdb847137601fb8f3-merged.mount: Deactivated successfully.
Dec  5 01:16:16 compute-0 podman[223484]: 2025-12-05 01:16:16.29734104 +0000 UTC m=+0.352311777 container remove db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_thompson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:16:16 compute-0 systemd[1]: libpod-conmon-db0d1332fe2cb3ff36d49fa1cc121eb8d5ebd4d454346887a29f3f4fb84075d5.scope: Deactivated successfully.
Dec  5 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.605562164 +0000 UTC m=+0.091946270 container create 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.569076013 +0000 UTC m=+0.055460149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:16 compute-0 systemd[1]: Started libpod-conmon-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope.
Dec  5 01:16:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.763463931 +0000 UTC m=+0.249848097 container init 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.792963269 +0000 UTC m=+0.279347375 container start 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:16 compute-0 podman[223523]: 2025-12-05 01:16:16.799525921 +0000 UTC m=+0.285910017 container attach 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:16:16 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.10 deep-scrub starts
Dec  5 01:16:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.10 deep-scrub ok
Dec  5 01:16:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:17 compute-0 agitated_jemison[223539]: {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    "0": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "devices": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "/dev/loop3"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            ],
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_name": "ceph_lv0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_size": "21470642176",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "name": "ceph_lv0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "tags": {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.crush_device_class": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.encrypted": "0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_id": "0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.vdo": "0"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            },
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "vg_name": "ceph_vg0"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        }
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    ],
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    "1": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "devices": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "/dev/loop4"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            ],
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_name": "ceph_lv1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_size": "21470642176",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "name": "ceph_lv1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "tags": {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.crush_device_class": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.encrypted": "0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_id": "1",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.vdo": "0"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            },
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "vg_name": "ceph_vg1"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        }
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    ],
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    "2": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "devices": [
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "/dev/loop5"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            ],
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_name": "ceph_lv2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_size": "21470642176",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "name": "ceph_lv2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "tags": {
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.cluster_name": "ceph",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.crush_device_class": "",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.encrypted": "0",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osd_id": "2",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:                "ceph.vdo": "0"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            },
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "type": "block",
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:            "vg_name": "ceph_vg2"
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:        }
Dec  5 01:16:17 compute-0 agitated_jemison[223539]:    ]
Dec  5 01:16:17 compute-0 agitated_jemison[223539]: }
Dec  5 01:16:17 compute-0 systemd[1]: libpod-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope: Deactivated successfully.
Dec  5 01:16:17 compute-0 podman[223523]: 2025-12-05 01:16:17.690696083 +0000 UTC m=+1.177080179 container died 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-631917f490f26ac1548e53ad3ad1d5d5f5af5f8025a660117b05adc8d0fa77f0-merged.mount: Deactivated successfully.
Dec  5 01:16:17 compute-0 podman[223523]: 2025-12-05 01:16:17.788825313 +0000 UTC m=+1.275209379 container remove 01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:16:17 compute-0 systemd[1]: libpod-conmon-01dfac9f455d005e7b674e4edaf1a6a393195559c91872c43b11ca0b2987be26.scope: Deactivated successfully.
Dec  5 01:16:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.5 deep-scrub starts
Dec  5 01:16:17 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.5 deep-scrub ok
Dec  5 01:16:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v132: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 2.8 KiB/s wr, 132 op/s
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.633616341 +0000 UTC m=+0.099295313 container create a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.598346214 +0000 UTC m=+0.064025206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:19 compute-0 systemd[1]: Started libpod-conmon-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope.
Dec  5 01:16:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.789314937 +0000 UTC m=+0.254993909 container init a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.808150049 +0000 UTC m=+0.273829041 container start a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.815697339 +0000 UTC m=+0.281376361 container attach a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:16:19 compute-0 loving_shirley[223716]: 167 167
Dec  5 01:16:19 compute-0 systemd[1]: libpod-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope: Deactivated successfully.
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.8211536 +0000 UTC m=+0.286832572 container died a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f639fe88e0a61952db252562806d46a6b513b07fb847d8377aaf815631eaec71-merged.mount: Deactivated successfully.
Dec  5 01:16:19 compute-0 podman[223700]: 2025-12-05 01:16:19.899490781 +0000 UTC m=+0.365169763 container remove a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:19 compute-0 systemd[1]: libpod-conmon-a14b5d8d76f821f38a5db81914ae9128acdcfb41ab465c4a28b534431165fe24.scope: Deactivated successfully.
Dec  5 01:16:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v133: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec  5 01:16:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  5 01:16:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  5 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.198046717 +0000 UTC m=+0.101628118 container create f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.162629406 +0000 UTC m=+0.066210887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:16:20 compute-0 systemd[1]: Started libpod-conmon-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope.
Dec  5 01:16:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.399188203 +0000 UTC m=+0.302769674 container init f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Dec  5 01:16:20 compute-0 podman[223754]: 2025-12-05 01:16:20.407956486 +0000 UTC m=+0.144443315 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.420639138 +0000 UTC m=+0.324220579 container start f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:16:20 compute-0 podman[223740]: 2025-12-05 01:16:20.427489978 +0000 UTC m=+0.331071469 container attach f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:16:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec  5 01:16:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec  5 01:16:21 compute-0 jolly_carson[223765]: {
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_id": 0,
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "type": "bluestore"
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    },
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_id": 1,
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "type": "bluestore"
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    },
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_id": 2,
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:16:21 compute-0 jolly_carson[223765]:        "type": "bluestore"
Dec  5 01:16:21 compute-0 jolly_carson[223765]:    }
Dec  5 01:16:21 compute-0 jolly_carson[223765]: }
Dec  5 01:16:21 compute-0 systemd[1]: libpod-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Deactivated successfully.
Dec  5 01:16:21 compute-0 podman[223740]: 2025-12-05 01:16:21.699564739 +0000 UTC m=+1.603146170 container died f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:16:21 compute-0 systemd[1]: libpod-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Consumed 1.282s CPU time.
Dec  5 01:16:21 compute-0 podman[223803]: 2025-12-05 01:16:21.742588472 +0000 UTC m=+0.141444512 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:16:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-274ca815e6b235f1172869034b20a42bbd9f75f4dc2de4f185db2b15c980c85e-merged.mount: Deactivated successfully.
Dec  5 01:16:21 compute-0 podman[223807]: 2025-12-05 01:16:21.822401624 +0000 UTC m=+0.213053817 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  5 01:16:21 compute-0 podman[223740]: 2025-12-05 01:16:21.831669111 +0000 UTC m=+1.735250512 container remove f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_carson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:16:21 compute-0 systemd[1]: libpod-conmon-f38ffc3185e6975ac2663c91c9fc0c6f759cd8dee7cbe1e586369809a93aef37.scope: Deactivated successfully.
Dec  5 01:16:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:16:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:16:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec  5 01:16:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec  5 01:16:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8dbdaefe-90b7-45c4-acd2-53dd119bdf3c does not exist
Dec  5 01:16:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev de32f3ef-18d5-4249-a79c-1298fd87fe83 does not exist
Dec  5 01:16:21 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 1)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec  5 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v134: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 170 B/s wr, 1 op/s
Dec  5 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  5 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  5 01:16:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  5 01:16:22 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  5 01:16:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:16:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:23 compute-0 podman[223918]: 2025-12-05 01:16:23.719638947 +0000 UTC m=+0.123504565 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  5 01:16:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  5 01:16:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  5 01:16:23 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  5 01:16:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:16:23 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v137: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 systemd-logind[792]: New session 41 of user zuul.
Dec  5 01:16:24 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec  5 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  5 01:16:24 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  5 01:16:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Dec  5 01:16:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:25 compute-0 podman[224017]: 2025-12-05 01:16:25.733419188 +0000 UTC m=+0.133960484 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vcs-type=git)
Dec  5 01:16:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  5 01:16:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  5 01:16:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] update: starting ev 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 582dba70-76ae-473a-a8e8-f92239aa32ac (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event bd60c833-eb19-451e-8af3-d097b0e1ed12 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 80c7654f-dc62-4d1b-9b5d-dfbd2305777a (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] complete: finished ev 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Completed event 4e9493b6-de84-45eb-905f-8cf9df45e752 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 53 pg[9.0( v 50'586 (0'0,50'586] local-lis/les=44/45 n=209 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.930982590s) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 50'585 active pruub 128.905731201s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  5 01:16:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 53 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=42/43 n=4 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.848893166s) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 43'3 active pruub 126.824256897s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.0( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53 pruub=9.848893166s) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 0'0 unknown pruub 126.824256897s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.11( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.19( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.e( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.13( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.12( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.a( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.5( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.14( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.16( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.8( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.4( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.3( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.d( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.7( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.15( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.17( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.10( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.18( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.1b( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.6( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.9( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[8.2( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=42/43 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.0( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53 pruub=11.930982590s) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 0'0 unknown pruub 128.905731201s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.7( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.6( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.9( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.8( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.4( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.3( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.5( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.a( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.2( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.b( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.c( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.d( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.e( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.f( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.10( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.11( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.13( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.12( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.14( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.15( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.16( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.17( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.18( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.19( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1a( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1b( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1c( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1d( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1e( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 54 pg[9.1f( v 50'586 lc 0'0 (0'0,50'586] local-lis/les=44/45 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v140: 259 pgs: 62 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Dec  5 01:16:26 compute-0 python3.9[224109]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:16:26 compute-0 ceph-mgr[193209]: [progress INFO root] Writing back 17 completed events
Dec  5 01:16:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Dec  5 01:16:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  5 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:16:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  5 01:16:27 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  5 01:16:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:27 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 55 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=46/47 n=8 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.949882507s) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 50'63 active pruub 124.379646301s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:27 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 55 pg[10.0( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55 pruub=12.949882507s) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 0'0 unknown pruub 124.379646301s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=48/49 n=2 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=14.987639427s) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 50'1 active pruub 133.016494751s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[11.0( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55 pruub=14.987639427s) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 0'0 unknown pruub 133.016494751s@ mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.14( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.16( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.17( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.0( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=44/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 50'585 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.2( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.3( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.8( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.a( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.7( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.0( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=42/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 43'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.4( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.5( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1a( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.19( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.13( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.12( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=42/42 les/c/f=43/43/0 sis=53) [1] r=0 lpr=53 pi=[42,53)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.10( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 55 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=44/44 les/c/f=45/45/0 sis=53) [1] r=0 lpr=53 pi=[44,53)/1 crt=50'586 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec  5 01:16:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec  5 01:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  5 01:16:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  5 01:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  5 01:16:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  5 01:16:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1b( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.13( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.12( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.11( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.10( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1d( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.19( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.18( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1a( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.7( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.6( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.5( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.4( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.f( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.8( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.9( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.c( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.e( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.14( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.3( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.15( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.16( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.17( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.16( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.15( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.14( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.2( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.17( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=48/49 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.2( v 50'64 lc 0'0 (0'0,50'64] local-lis/les=46/47 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.13( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.8( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.3( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.4( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.5( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.6( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.7( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.18( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1b( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1c( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.a( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1e( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.10( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.d( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1f( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.11( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1d( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.12( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.19( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=48/49 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v143: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1c( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.18( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1d( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.c( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.5( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.9( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.14( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.0( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 50'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.3( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 56 pg[10.15( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=46/46 les/c/f=47/47/0 sis=55) [2] r=0 lpr=55 pi=[46,55)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.16( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.13( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.0( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=48/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 50'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.5( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.7( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 56 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=48/48 les/c/f=49/49/0 sis=55) [1] r=0 lpr=55 pi=[48,55)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:28 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Dec  5 01:16:28 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Dec  5 01:16:28 compute-0 podman[224307]: 2025-12-05 01:16:28.67251554 +0000 UTC m=+0.114975568 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec  5 01:16:28 compute-0 python3.9[224354]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:16:29 compute-0 podman[158197]: time="2025-12-05T01:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6787 "" "Go-http-client/1.1"
Dec  5 01:16:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v144: 321 pgs: 62 unknown, 64 peering, 195 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  5 01:16:30 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  5 01:16:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec  5 01:16:31 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec  5 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:16:31 compute-0 openstack_network_exporter[160350]: ERROR   01:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v145: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  5 01:16:32 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  5 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.878374100s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.463775635s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.d( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.877438545s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.463775635s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889671326s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476364136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889369965s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476135254s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889344215s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476135254s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889582634s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476364136s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889259338s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476226807s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889238358s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476226807s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889079094s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476379395s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889013290s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476318359s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.888989449s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476318359s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.889044762s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476379395s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.875101089s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.463745117s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.875068665s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.463745117s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.887742996s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476470947s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.887702942s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476470947s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.890359879s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476287842s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886901855s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476287842s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886710167s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476531982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886675835s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476531982s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886508942s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476531982s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886487961s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476531982s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885747910s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476074219s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885709763s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476074219s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886237144s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476654053s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886204720s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476654053s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886037827s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476654053s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.886006355s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476654053s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885848999s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476882935s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885631561s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476699829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885595322s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.476928711s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.9( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885489464s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476882935s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885498047s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477096558s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885476112s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477096558s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885182381s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476974487s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.14( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885154724s) [1] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885043144s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 50'64 active pruub 128.476974487s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.15( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885017395s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476974487s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.885004997s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477066040s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884985924s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477066040s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884865761s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active pruub 128.477081299s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884846687s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.477081299s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.e( v 56'65 (0'0,56'65] local-lis/les=55/56 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884408951s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 50'64 mlcod 0'0 unknown NOTIFY pruub 128.476699829s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=55/56 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.884453773s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.476928711s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.10( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.1a( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.6( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.4( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.12( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.859036446s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.061859131s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851114273s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054016113s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851468086s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054367065s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851088524s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054016113s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.851434708s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054367065s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.837277412s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.040405273s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.837207794s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.040405273s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.871263504s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074539185s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850790024s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054122925s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.871229172s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074539185s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.858995438s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.061859131s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850772858s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054122925s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870972633s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074707031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870937347s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074707031s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850204468s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054229736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850178719s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054229736s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870398521s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074630737s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870374680s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074630737s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849956512s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054382324s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849859238s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054306030s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849934578s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054382324s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849824905s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054306030s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849868774s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054428101s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849845886s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054428101s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.870020866s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074829102s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869997025s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074829102s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850317001s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055297852s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.850296974s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055297852s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869367599s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074676514s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849087715s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054519653s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869245529s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074844360s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869209290s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074844360s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848733902s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054565430s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848699570s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054565430s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868861198s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074920654s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848338127s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054580688s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848313332s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054580688s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848398209s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054809570s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848378181s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054809570s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868321419s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074859619s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868298531s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074859619s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868830681s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074920654s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848031998s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054763794s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.848009109s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054763794s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.868103981s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074890137s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847919464s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054824829s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847896576s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054824829s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=55/56 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.869333267s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074676514s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867715836s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.074935913s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847631454s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054870605s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867692947s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074935913s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847529411s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054931641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847506523s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054931641s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847595215s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054870605s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.849052429s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054519653s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847260475s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.054946899s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.847226143s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054946899s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867200851s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075012207s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.867165565s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075012207s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846507072s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.054992676s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846475601s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.054992676s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866384506s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075042725s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866353035s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075042725s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846394539s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055221558s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846260071s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055130005s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866254807s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075134277s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846359253s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055221558s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846236229s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055130005s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.866223335s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075134277s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846088409s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055297852s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=53/55 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846068382s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055297852s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846284866s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055557251s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846252441s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055557251s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846049309s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055450439s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.846034050s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055450439s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865964890s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075439453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865938187s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075439453s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865914345s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075439453s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865899086s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075439453s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845909119s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055603027s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845893860s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055603027s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865543365s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075302124s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865718842s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075500488s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865479469s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075302124s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.865605354s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075500488s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.844847679s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056213379s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.844816208s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056213379s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.2( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.15( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.b( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.d( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863600731s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075500488s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863578796s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075500488s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.4( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843503952s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055816650s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843461037s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055816650s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.1b( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1a( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1b( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863329887s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075790405s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843858719s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056442261s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.863151550s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075790405s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843791962s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056442261s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843135834s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.055923462s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.843114853s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055923462s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862828255s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075653076s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862795830s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075653076s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862702370s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075668335s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862681389s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075668335s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842847824s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055862427s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842951775s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056015015s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842913628s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056030273s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842921257s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056015015s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842892647s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056030273s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842644691s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055862427s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842899323s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056274414s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842875481s) [2] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056274414s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862174034s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075698853s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.862127304s) [0] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075698853s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842557907s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.056243896s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842420578s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active pruub 134.056137085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842535973s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056243896s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=53/55 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.842396736s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.056137085s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.845877647s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 134.055511475s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57 pruub=10.841468811s) [0] r=-1 lpr=57 pi=[53,57)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 134.055511475s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.861424446s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active pruub 135.075729370s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.861391068s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.075729370s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 57 pg[11.9( v 50'2 (0'0,50'2] local-lis/les=55/56 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57 pruub=11.859471321s) [2] r=-1 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 135.074890137s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.18( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.1e( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.16( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1c( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1e( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.1f( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.11( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.12( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.14( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.10( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.12( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.14( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.f( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.e( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 57 pg[11.9( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.1( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.4( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.6( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.18( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1d( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.10( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[8.1a( empty local-lis/les=0/0 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 57 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:32 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Dec  5 01:16:32 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Dec  5 01:16:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  5 01:16:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  5 01:16:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec  5 01:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  5 01:16:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec  5 01:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  5 01:16:33 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  5 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  5 01:16:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.3( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.1( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.d( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.1c( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1f( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.9( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.b( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.5( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.11( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.13( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.10( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.11( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1a( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.b( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.11( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.11( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.12( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1e( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1c( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.1b( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.18( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.d( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.1b( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.9( v 50'2 lc 0'0 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.4( v 43'4 (0'0,43'4] local-lis/les=57/58 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.8( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.3( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.2( v 50'2 (0'0,50'2] local-lis/les=57/58 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.15( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.2( v 43'4 (0'0,43'4] local-lis/les=57/58 n=1 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.15( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[8.12( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 58 pg[11.d( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1d( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.1a( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.19( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.b( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.6( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.12( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.14( v 56'65 lc 50'54 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.f( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 58 pg[10.2( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [1] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1f( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.1( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.17( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.16( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.1a( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.19( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.18( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.14( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.1( v 50'2 (0'0,50'2] local-lis/les=57/58 n=1 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.1e( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.7( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.4( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.8( v 50'64 (0'0,50'64] local-lis/les=57/58 n=1 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.9( v 56'65 lc 50'56 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.15( v 56'65 lc 50'46 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.17( v 50'64 (0'0,50'64] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.e( v 56'65 lc 50'48 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.e( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.f( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.e( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.c( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.6( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.9( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.f( v 43'4 lc 0'0 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.14( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.6( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.4( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.b( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[11.10( v 50'2 (0'0,50'2] local-lis/les=57/58 n=0 ec=55/48 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=50'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[8.10( v 43'4 (0'0,43'4] local-lis/les=57/58 n=0 ec=53/42 lis/c=53/53 les/c/f=55/55/0 sis=57) [0] r=0 lpr=57 pi=[53,57)/1 crt=43'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 58 pg[10.d( v 56'65 lc 50'50 (0'0,56'65] local-lis/les=57/58 n=0 ec=55/46 lis/c=55/55 les/c/f=56/56/0 sis=57) [0] r=0 lpr=57 pi=[55,57)/1 crt=56'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:33 compute-0 podman[224378]: 2025-12-05 01:16:33.72623164 +0000 UTC m=+0.144591629 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:16:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v148: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Dec  5 01:16:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  5 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  5 01:16:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  5 01:16:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  5 01:16:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  5 01:16:34 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 59 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=58) [0]/[1] async=[0] r=0 lpr=58 pi=[53,58)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  5 01:16:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  5 01:16:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  5 01:16:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.791930199s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.058853149s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.791602135s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.058853149s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.803325653s) [0] async=[0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071273804s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:35 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60 pruub=15.801901817s) [0] r=-1 lpr=60 pi=[53,60)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071273804s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:35 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 60 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v151: 321 pgs: 321 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 objects/s recovering
Dec  5 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Dec  5 01:16:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  5 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  5 01:16:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  5 01:16:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  5 01:16:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  5 01:16:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.766530991s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072189331s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765852928s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071533203s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765682220s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071380615s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765716553s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071533203s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765491486s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071380615s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765296936s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071563721s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765923500s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072189331s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765857697s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072250366s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765787125s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072250366s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765112877s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071807861s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764968872s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071807861s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.765035629s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072021484s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764240265s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071563721s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764391899s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071929932s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764489174s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072021484s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.764314651s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071929932s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.763593674s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071670532s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.763382912s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071670532s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.762916565s) [0] async=[0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071777344s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61 pruub=14.762870789s) [0] r=-1 lpr=61 pi=[53,61)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071777344s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:36 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 61 pg[9.b( v 50'586 (0'0,50'586] local-lis/les=60/61 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=60) [0] r=0 lpr=60 pi=[53,60)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:36 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec  5 01:16:36 compute-0 systemd[1]: session-41.scope: Consumed 10.288s CPU time.
Dec  5 01:16:36 compute-0 systemd-logind[792]: Session 41 logged out. Waiting for processes to exit.
Dec  5 01:16:36 compute-0 systemd-logind[792]: Removed session 41.
Dec  5 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  5 01:16:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  5 01:16:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.017174721s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072509766s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.016999245s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072509766s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.015672684s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072311401s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.015570641s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072311401s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013773918s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.071350098s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=58/59 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013423920s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071350098s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013886452s) [0] async=[0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.072387695s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=58/59 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62 pruub=14.013783455s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.072387695s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1b( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.9( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1d( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.1( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 62 pg[9.d( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=61) [0] r=0 lpr=61 pi=[53,61)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.12 scrub starts
Dec  5 01:16:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.12 scrub ok
Dec  5 01:16:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  5 01:16:37 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Dec  5 01:16:37 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Dec  5 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  5 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  5 01:16:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  5 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.3( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.5( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:38 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 63 pg[9.11( v 50'586 (0'0,50'586] local-lis/les=62/63 n=7 ec=53/44 lis/c=58/53 les/c/f=59/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v155: 321 pgs: 4 active+remapped, 317 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 382 B/s, 9 objects/s recovering
Dec  5 01:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Dec  5 01:16:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  5 01:16:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  5 01:16:38 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec  5 01:16:38 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec  5 01:16:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Dec  5 01:16:38 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Dec  5 01:16:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  5 01:16:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  5 01:16:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  5 01:16:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  5 01:16:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  5 01:16:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v157: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 651 B/s, 26 objects/s recovering
Dec  5 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Dec  5 01:16:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  5 01:16:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec  5 01:16:40 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec  5 01:16:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  5 01:16:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  5 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  5 01:16:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  5 01:16:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  5 01:16:40 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  5 01:16:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  5 01:16:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec  5 01:16:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec  5 01:16:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  5 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v159: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 15 objects/s recovering
Dec  5 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Dec  5 01:16:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  5 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  5 01:16:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  5 01:16:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  5 01:16:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  5 01:16:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.543 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.544 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.561 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.565 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.567 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.567 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.568 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.573 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.573 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.574 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.575 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.576 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.577 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.578 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:16:42.579 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.198377609s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.054595947s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.198309898s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.054595947s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196864128s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.055664062s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196774483s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.055664062s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196623802s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.056686401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.196444511s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.056686401s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.194192886s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 142.055496216s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:42 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 66 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66 pruub=8.194120407s) [2] r=-1 lpr=66 pi=[53,66)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.055496216s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 66 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=66) [2] r=0 lpr=66 pi=[53,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:43 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec  5 01:16:43 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec  5 01:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  5 01:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  5 01:16:43 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  5 01:16:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 67 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.6( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 67 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] r=-1 lpr=67 pi=[53,67)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec  5 01:16:43 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec  5 01:16:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  5 01:16:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  5 01:16:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v162: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 328 B/s, 15 objects/s recovering
Dec  5 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Dec  5 01:16:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  5 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  5 01:16:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  5 01:16:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  5 01:16:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  5 01:16:44 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  5 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:44 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 68 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=67) [2]/[1] async=[2] r=0 lpr=67 pi=[53,67)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.709008217s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131195068s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708824158s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131195068s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708337784s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131195068s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.708277702s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131195068s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68 pruub=8.703824043s) [2] r=-1 lpr=68 pi=[62,68)/1 crt=50'586 mlcod 0'0 active pruub 151.127349854s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68 pruub=8.703773499s) [2] r=-1 lpr=68 pi=[62,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 151.127349854s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.707578659s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 active pruub 158.131271362s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 68 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68 pruub=15.707477570s) [2] r=-1 lpr=68 pi=[61,68)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 158.131271362s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=68) [2] r=0 lpr=68 pi=[62,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=68) [2] r=0 lpr=68 pi=[61,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  5 01:16:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  5 01:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  5 01:16:45 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[62,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=-1 lpr=69 pi=[61,69)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=62/63 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282924652s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742507935s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282825470s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742507935s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.282798767s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742675781s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.273217201s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.733306885s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.273057938s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.733306885s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 69 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=61/62 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.280101776s) [2] async=[2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 151.742477417s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=67/68 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.280090332s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742675781s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 69 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=67/68 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69 pruub=15.279090881s) [2] r=-1 lpr=69 pi=[53,69)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.742477417s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec  5 01:16:45 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v165: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Dec  5 01:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  5 01:16:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  5 01:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  5 01:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  5 01:16:46 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  5 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.6( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 70 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=67/53 les/c/f=68/55/0 sis=69) [2] r=0 lpr=69 pi=[53,69)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=62/62 les/c/f=63/63/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[62,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:46 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 70 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=69) [2]/[0] async=[2] r=0 lpr=69 pi=[61,69)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  5 01:16:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  5 01:16:47 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.443318367s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.559448242s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447704315s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.563919067s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447580338s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.563919067s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.443078041s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.559448242s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447667122s) [2] async=[2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 50'586 active pruub 159.564453125s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=69/70 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71 pruub=15.447522163s) [2] r=-1 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.564453125s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71 pruub=15.446838379s) [2] async=[2] r=-1 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 50'586 active pruub 159.563934326s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 71 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71 pruub=15.446743965s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 159.563934326s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 70 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760512352s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 150.056045532s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 71 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760368347s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.056045532s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70) [2] r=0 lpr=71 pi=[53,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 70 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760275841s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 150.057830811s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:47 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 71 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70 pruub=11.760203362s) [2] r=-1 lpr=70 pi=[53,70)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 150.057830811s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=70) [2] r=0 lpr=71 pi=[53,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  5 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  5 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  5 01:16:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[53,72)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.17( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=69/62 les/c/f=70/63/0 sis=71) [2] r=0 lpr=71 pi=[62,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v169: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 59 B/s, 6 objects/s recovering
Dec  5 01:16:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec  5 01:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Dec  5 01:16:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  5 01:16:48 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 72 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.7( v 50'586 (0'0,50'586] local-lis/les=71/72 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=7 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 72 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=69/61 les/c/f=70/62/0 sis=71) [2] r=0 lpr=71 pi=[61,71)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:48 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec  5 01:16:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.4 deep-scrub starts
Dec  5 01:16:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.4 deep-scrub ok
Dec  5 01:16:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec  5 01:16:48 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec  5 01:16:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  5 01:16:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  5 01:16:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  5 01:16:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  5 01:16:49 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  5 01:16:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 73 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 73 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[53,72)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:50 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec  5 01:16:50 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec  5 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  5 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  5 01:16:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v172: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 60 B/s, 6 objects/s recovering
Dec  5 01:16:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  5 01:16:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Dec  5 01:16:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  5 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  5 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.009253502s) [2] async=[2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 156.109695435s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=72/73 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.009100914s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.109695435s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.016135216s) [2] async=[2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 156.117431641s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 74 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=72/73 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74 pruub=15.016005516s) [2] r=-1 lpr=74 pi=[53,74)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.117431641s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  5 01:16:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  5 01:16:50 compute-0 podman[224436]: 2025-12-05 01:16:50.719199746 +0000 UTC m=+0.122662841 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 01:16:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1b scrub starts
Dec  5 01:16:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 6.1b scrub ok
Dec  5 01:16:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  5 01:16:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  5 01:16:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  5 01:16:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  5 01:16:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 75 pg[9.8( v 50'586 (0'0,50'586] local-lis/les=74/75 n=7 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 75 pg[9.18( v 50'586 (0'0,50'586] local-lis/les=74/75 n=6 ec=53/44 lis/c=72/53 les/c/f=73/55/0 sis=74) [2] r=0 lpr=74 pi=[53,74)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  5 01:16:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  5 01:16:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec  5 01:16:51 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec  5 01:16:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec  5 01:16:51 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec  5 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v174: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 190 B/s, 9 objects/s recovering
Dec  5 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Dec  5 01:16:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  5 01:16:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  5 01:16:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  5 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  5 01:16:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  5 01:16:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  5 01:16:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  5 01:16:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  5 01:16:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec  5 01:16:52 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec  5 01:16:52 compute-0 podman[224455]: 2025-12-05 01:16:52.707620235 +0000 UTC m=+0.106191064 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:16:52 compute-0 podman[224456]: 2025-12-05 01:16:52.779294762 +0000 UTC m=+0.173165201 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:16:53 compute-0 systemd-logind[792]: New session 42 of user zuul.
Dec  5 01:16:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  5 01:16:53 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec  5 01:16:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v176: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 153 B/s, 7 objects/s recovering
Dec  5 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Dec  5 01:16:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  5 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  5 01:16:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  5 01:16:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  5 01:16:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  5 01:16:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  5 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.866765022s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 158.059799194s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.866698265s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.059799194s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.863279343s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 158.060073853s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:54 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 77 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77 pruub=12.863193512s) [2] r=-1 lpr=77 pi=[53,77)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.060073853s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:54 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 77 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:54 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 77 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=77) [2] r=0 lpr=77 pi=[53,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.a deep-scrub starts
Dec  5 01:16:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.a deep-scrub ok
Dec  5 01:16:54 compute-0 podman[224630]: 2025-12-05 01:16:54.316791161 +0000 UTC m=+0.194625506 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Dec  5 01:16:54 compute-0 python3.9[224671]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  5 01:16:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec  5 01:16:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec  5 01:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  5 01:16:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  5 01:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  5 01:16:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  5 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:55 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 78 pg[9.c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=-1 lpr=78 pi=[53,78)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:55 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 78 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=53/55 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:55 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec  5 01:16:55 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec  5 01:16:56 compute-0 python3.9[224851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:16:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 154 B/s, 7 objects/s recovering
Dec  5 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Dec  5 01:16:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  5 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  5 01:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  5 01:16:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  5 01:16:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  5 01:16:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  5 01:16:56 compute-0 podman[224880]: 2025-12-05 01:16:56.722751565 +0000 UTC m=+0.125041957 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, release=1214.1726694543, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, version=9.4, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Dec  5 01:16:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 79 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:56 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 79 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=53/53 les/c/f=55/55/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[53,78)/1 crt=50'586 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  5 01:16:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  5 01:16:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  5 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.694095612s) [2] async=[2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 163.759887695s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.693988800s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.759887695s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.690936089s) [2] async=[2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 active pruub 163.759902954s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:57 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=78/79 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80 pruub=15.690846443s) [2] r=-1 lpr=80 pi=[53,80)/1 crt=50'586 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 163.759902954s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:16:57 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 80 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:16:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  5 01:16:57 compute-0 python3.9[225027]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:16:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.e deep-scrub starts
Dec  5 01:16:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.e deep-scrub ok
Dec  5 01:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  5 01:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  5 01:16:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  5 01:16:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
Dec  5 01:16:58 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 81 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:58 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 81 pg[9.c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=7 ec=53/44 lis/c=78/53 les/c/f=79/55/0 sis=80) [2] r=0 lpr=80 pi=[53,80)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:16:58 compute-0 podman[225152]: 2025-12-05 01:16:58.951061763 +0000 UTC m=+0.108220751 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  5 01:16:59 compute-0 python3.9[225198]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:16:59 compute-0 podman[158197]: time="2025-12-05T01:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6783 "" "Go-http-client/1.1"
Dec  5 01:17:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 2 activating+remapped, 319 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 12/247 objects misplaced (4.858%)
Dec  5 01:17:00 compute-0 python3.9[225353]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: ERROR   01:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:17:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:17:01 compute-0 python3.9[225505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  5 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Dec  5 01:17:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  5 01:17:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  5 01:17:02 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  5 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  5 01:17:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  5 01:17:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  5 01:17:02 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  5 01:17:02 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  5 01:17:02 compute-0 python3.9[225655]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:17:02 compute-0 network[225672]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:17:02 compute-0 network[225673]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:17:02 compute-0 network[225674]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:17:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec  5 01:17:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec  5 01:17:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  5 01:17:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec  5 01:17:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec  5 01:17:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Dec  5 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Dec  5 01:17:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  5 01:17:04 compute-0 podman[225681]: 2025-12-05 01:17:04.18475614 +0000 UTC m=+0.158958516 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  5 01:17:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  5 01:17:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  5 01:17:04 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  5 01:17:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  5 01:17:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec  5 01:17:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec  5 01:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  5 01:17:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec  5 01:17:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec  5 01:17:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Dec  5 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Dec  5 01:17:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  5 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  5 01:17:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  5 01:17:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  5 01:17:06 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  5 01:17:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  5 01:17:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec  5 01:17:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec  5 01:17:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  5 01:17:07 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec  5 01:17:07 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec  5 01:17:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v191: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Dec  5 01:17:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  5 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  5 01:17:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  5 01:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  5 01:17:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  5 01:17:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  5 01:17:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1b deep-scrub starts
Dec  5 01:17:08 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1b deep-scrub ok
Dec  5 01:17:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec  5 01:17:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec  5 01:17:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  5 01:17:09 compute-0 python3.9[225965]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:17:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v193: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Dec  5 01:17:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  5 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  5 01:17:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  5 01:17:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  5 01:17:10 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  5 01:17:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  5 01:17:10 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  5 01:17:10 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  5 01:17:10 compute-0 python3.9[226115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:17:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  5 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Dec  5 01:17:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  5 01:17:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec  5 01:17:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec  5 01:17:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec  5 01:17:12 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec  5 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  5 01:17:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  5 01:17:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  5 01:17:12 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  5 01:17:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  5 01:17:12 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 87 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87 pruub=11.967425346s) [2] r=-1 lpr=87 pi=[60,87)/1 crt=50'586 mlcod 0'0 active pruub 181.384109497s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:12 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 87 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87 pruub=11.966161728s) [2] r=-1 lpr=87 pi=[60,87)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 181.384109497s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:12 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 87 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=87) [2] r=0 lpr=87 pi=[60,87)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:12 compute-0 python3.9[226269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:17:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec  5 01:17:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec  5 01:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  5 01:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  5 01:17:13 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  5 01:17:13 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[60,88)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:13 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 88 pg[9.13( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=-1 lpr=88 pi=[60,88)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:13 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  5 01:17:13 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 88 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:13 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 88 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=60/61 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v198: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Dec  5 01:17:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  5 01:17:14 compute-0 python3.9[226427]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  5 01:17:14 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  5 01:17:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  5 01:17:14 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  5 01:17:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  5 01:17:14 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 89 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=60/60 les/c/f=61/61/0 sis=88) [2]/[0] async=[2] r=0 lpr=88 pi=[60,88)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec  5 01:17:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec  5 01:17:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  5 01:17:15 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  5 01:17:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  5 01:17:15 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  5 01:17:15 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90 pruub=15.126440048s) [2] async=[2] r=-1 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 50'586 active pruub 187.626922607s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:15 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=88/89 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90 pruub=15.126286507s) [2] r=-1 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 187.626922607s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:15 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:15 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 90 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:15 compute-0 python3.9[226511]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:17:16
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v201: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  5 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Dec  5 01:17:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:17:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  5 01:17:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  5 01:17:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  5 01:17:16 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  5 01:17:16 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 91 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91 pruub=8.612095833s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=50'586 mlcod 0'0 active pruub 182.124923706s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:16 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 91 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91 pruub=8.612011909s) [1] r=-1 lpr=91 pi=[61,91)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 182.124923706s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  5 01:17:16 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=91) [1] r=0 lpr=91 pi=[61,91)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:16 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 91 pg[9.13( v 50'586 (0'0,50'586] local-lis/les=90/91 n=6 ec=53/44 lis/c=88/60 les/c/f=89/61/0 sis=90) [2] r=0 lpr=90 pi=[60,90)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  5 01:17:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  5 01:17:17 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  5 01:17:17 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:17 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 92 pg[9.15( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=-1 lpr=92 pi=[61,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:17 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 92 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:17 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 92 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  5 01:17:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec  5 01:17:17 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec  5 01:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  5 01:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  5 01:17:18 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  5 01:17:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:18 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 93 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=92) [1]/[0] async=[1] r=0 lpr=92 pi=[61,92)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  5 01:17:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  5 01:17:19 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  5 01:17:19 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94 pruub=15.255028725s) [1] async=[1] r=-1 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 50'586 active pruub 191.404785156s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:19 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=92/93 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94 pruub=15.254875183s) [1] r=-1 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 191.404785156s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:19 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:19 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 94 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:19 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec  5 01:17:19 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec  5 01:17:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  5 01:17:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 1 unknown, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  5 01:17:20 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  5 01:17:20 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 95 pg[9.15( v 50'586 (0'0,50'586] local-lis/les=94/95 n=6 ec=53/44 lis/c=92/61 les/c/f=93/62/0 sis=94) [1] r=0 lpr=94 pi=[61,94)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec  5 01:17:20 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec  5 01:17:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Dec  5 01:17:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Dec  5 01:17:21 compute-0 podman[226581]: 2025-12-05 01:17:21.713674842 +0000 UTC m=+0.120185122 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec  5 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v209: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 202 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec  5 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Dec  5 01:17:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  5 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  5 01:17:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  5 01:17:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  5 01:17:22 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  5 01:17:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 96 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96 pruub=12.366931915s) [0] r=-1 lpr=96 pi=[69,96)/1 crt=50'586 mlcod 0'0 active pruub 178.917022705s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:22 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 96 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96 pruub=12.365426064s) [0] r=-1 lpr=96 pi=[69,96)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 178.917022705s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  5 01:17:22 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=96) [0] r=0 lpr=96 pi=[69,96)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:22 compute-0 podman[226699]: 2025-12-05 01:17:22.8734204 +0000 UTC m=+0.111768770 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:17:23 compute-0 podman[226730]: 2025-12-05 01:17:23.099492366 +0000 UTC m=+0.174952010 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 01:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  5 01:17:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  5 01:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  5 01:17:23 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  5 01:17:23 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 97 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:23 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 97 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[69,97)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:23 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 97 pg[9.16( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] r=-1 lpr=97 pi=[69,97)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:23 compute-0 podman[226820]: 2025-12-05 01:17:23.640177334 +0000 UTC m=+0.133442850 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:17:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.11 deep-scrub starts
Dec  5 01:17:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.11 deep-scrub ok
Dec  5 01:17:23 compute-0 podman[226820]: 2025-12-05 01:17:23.75762771 +0000 UTC m=+0.250893216 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:17:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v212: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 203 B/s wr, 9 op/s; 43 B/s, 1 objects/s recovering
Dec  5 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Dec  5 01:17:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  5 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  5 01:17:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  5 01:17:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  5 01:17:24 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  5 01:17:24 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  5 01:17:24 compute-0 podman[226907]: 2025-12-05 01:17:24.521025772 +0000 UTC m=+0.110539925 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:17:24 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 98 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=97) [0]/[2] async=[0] r=0 lpr=97 pi=[69,97)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:17:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:17:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  5 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  5 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  5 01:17:25 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  5 01:17:25 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99 pruub=15.491202354s) [0] async=[0] r=-1 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 50'586 active pruub 185.130523682s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:25 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=97/98 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99 pruub=15.491126060s) [0] r=-1 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 185.130523682s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:25 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:25 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 99 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.225674773718825e-06 of space, bias 1.0, pg target 0.0006677024321156476 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v215: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  5 01:17:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  5 01:17:26 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 100 pg[9.16( v 50'586 (0'0,50'586] local-lis/les=99/100 n=6 ec=53/44 lis/c=97/69 les/c/f=98/70/0 sis=99) [0] r=0 lpr=99 pi=[69,99)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 92cd3238-542e-4ea8-902a-65773c96e4b3 does not exist
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2ed5db00-e6ab-4bab-befd-b74e0422a497 does not exist
Dec  5 01:17:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9684fcf6-aeb9-4b8f-a184-4295986bdcf9 does not exist
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:17:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:17:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:17:26 compute-0 podman[227196]: 2025-12-05 01:17:26.941142857 +0000 UTC m=+0.131196968 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:17:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec  5 01:17:27 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec  5 01:17:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  5 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:17:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec  5 01:17:27 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.532119589 +0000 UTC m=+0.089522382 container create c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.495577626 +0000 UTC m=+0.052980469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:27 compute-0 systemd[1]: Started libpod-conmon-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope.
Dec  5 01:17:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.710020701 +0000 UTC m=+0.267423544 container init c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.728091622 +0000 UTC m=+0.285494405 container start c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.735498867 +0000 UTC m=+0.292901670 container attach c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:17:27 compute-0 crazy_swirles[227298]: 167 167
Dec  5 01:17:27 compute-0 systemd[1]: libpod-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope: Deactivated successfully.
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.743701534 +0000 UTC m=+0.301104327 container died c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c6aba9c84185f3dfd8ca514f8be55e970f49159630f1134d7b1e981e95a2e40-merged.mount: Deactivated successfully.
Dec  5 01:17:27 compute-0 podman[227282]: 2025-12-05 01:17:27.838698728 +0000 UTC m=+0.396101511 container remove c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_swirles, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:17:27 compute-0 systemd[1]: libpod-conmon-c638f4b3c296ff6b37e6bb8f4ffde5b21de065aa97fd05aac7fe158f92a4922b.scope: Deactivated successfully.
Dec  5 01:17:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  5 01:17:28 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  5 01:17:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v217: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Dec  5 01:17:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  5 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.157520795 +0000 UTC m=+0.105849515 container create cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.118469533 +0000 UTC m=+0.066798313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:28 compute-0 systemd[1]: Started libpod-conmon-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope.
Dec  5 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  5 01:17:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  5 01:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  5 01:17:28 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  5 01:17:28 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  5 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.345752013 +0000 UTC m=+0.294080753 container init cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.368309968 +0000 UTC m=+0.316638688 container start cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:17:28 compute-0 podman[227324]: 2025-12-05 01:17:28.375632521 +0000 UTC m=+0.323961301 container attach cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:17:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 101 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101 pruub=12.510266304s) [2] r=-1 lpr=101 pi=[61,101)/1 crt=50'586 mlcod 0'0 active pruub 198.134582520s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:28 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 101 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101 pruub=12.510182381s) [2] r=-1 lpr=101 pi=[61,101)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 198.134582520s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:28 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=101) [2] r=0 lpr=101 pi=[61,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:28 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec  5 01:17:28 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec  5 01:17:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  5 01:17:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  5 01:17:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  5 01:17:29 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  5 01:17:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 102 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:29 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 102 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=61/62 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[61,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:29 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[61,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:29 compute-0 hungry_greider[227340]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:17:29 compute-0 hungry_greider[227340]: --> relative data size: 1.0
Dec  5 01:17:29 compute-0 hungry_greider[227340]: --> All data devices are unavailable
Dec  5 01:17:29 compute-0 podman[227362]: 2025-12-05 01:17:29.751053788 +0000 UTC m=+0.149043343 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:17:29 compute-0 podman[158197]: time="2025-12-05T01:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34526 "" "Go-http-client/1.1"
Dec  5 01:17:29 compute-0 systemd[1]: libpod-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Deactivated successfully.
Dec  5 01:17:29 compute-0 systemd[1]: libpod-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Consumed 1.313s CPU time.
Dec  5 01:17:29 compute-0 podman[227324]: 2025-12-05 01:17:29.782218321 +0000 UTC m=+1.730547031 container died cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6790 "" "Go-http-client/1.1"
Dec  5 01:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-367b47647045a64f1c363345e6835590b125779398cb65609fdb3f9d16862825-merged.mount: Deactivated successfully.
Dec  5 01:17:29 compute-0 podman[227324]: 2025-12-05 01:17:29.8788412 +0000 UTC m=+1.827169880 container remove cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:17:29 compute-0 systemd[1]: libpod-conmon-cbdced91630455828dee7e0193758cd323addc542c8577ee515458a086445269.scope: Deactivated successfully.
Dec  5 01:17:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v220: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Dec  5 01:17:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  5 01:17:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  5 01:17:30 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  5 01:17:30 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 103 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=61/61 les/c/f=62/62/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[61,102)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.060461525 +0000 UTC m=+0.081015107 container create 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:17:31 compute-0 systemd[1]: Started libpod-conmon-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope.
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.027784889 +0000 UTC m=+0.048338531 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.205368421 +0000 UTC m=+0.225922003 container init 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.216192782 +0000 UTC m=+0.236746354 container start 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.220387038 +0000 UTC m=+0.240940630 container attach 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:17:31 compute-0 hardcore_jepsen[227565]: 167 167
Dec  5 01:17:31 compute-0 systemd[1]: libpod-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope: Deactivated successfully.
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.230402205 +0000 UTC m=+0.250955847 container died 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:17:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6943ac771bfa0bc789a6c4df7ca3b0c20316bcab3851214083cba0f29dc05bd2-merged.mount: Deactivated successfully.
Dec  5 01:17:31 compute-0 podman[227549]: 2025-12-05 01:17:31.306351681 +0000 UTC m=+0.326905263 container remove 52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:17:31 compute-0 systemd[1]: libpod-conmon-52cb86b8ccb63e50230dee68d1579cabefb2f435ed8ac5efa85eb818b7fe8241.scope: Deactivated successfully.
Dec  5 01:17:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  5 01:17:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  5 01:17:31 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  5 01:17:31 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104 pruub=15.241423607s) [2] async=[2] r=-1 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 50'586 active pruub 203.666870117s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:31 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=102/103 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104 pruub=15.241143227s) [2] r=-1 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 203.666870117s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:31 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:31 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 104 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: ERROR   01:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:17:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.544745989 +0000 UTC m=+0.072996594 container create 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.513493083 +0000 UTC m=+0.041743708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:31 compute-0 systemd[1]: Started libpod-conmon-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope.
Dec  5 01:17:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.720038448 +0000 UTC m=+0.248289133 container init 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.730437926 +0000 UTC m=+0.258688551 container start 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec  5 01:17:31 compute-0 podman[227592]: 2025-12-05 01:17:31.736160815 +0000 UTC m=+0.264411440 container attach 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v223: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec  5 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  5 01:17:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  5 01:17:32 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  5 01:17:32 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 105 pg[9.19( v 50'586 (0'0,50'586] local-lis/les=104/105 n=6 ec=53/44 lis/c=102/61 les/c/f=103/62/0 sis=104) [2] r=0 lpr=104 pi=[61,104)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:32 compute-0 great_solomon[227613]: {
Dec  5 01:17:32 compute-0 great_solomon[227613]:    "0": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:        {
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "devices": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "/dev/loop3"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            ],
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_name": "ceph_lv0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_size": "21470642176",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "name": "ceph_lv0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "tags": {
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_name": "ceph",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.crush_device_class": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.encrypted": "0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_id": "0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.vdo": "0"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            },
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "vg_name": "ceph_vg0"
Dec  5 01:17:32 compute-0 great_solomon[227613]:        }
Dec  5 01:17:32 compute-0 great_solomon[227613]:    ],
Dec  5 01:17:32 compute-0 great_solomon[227613]:    "1": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:        {
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "devices": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "/dev/loop4"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            ],
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_name": "ceph_lv1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_size": "21470642176",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "name": "ceph_lv1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "tags": {
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_name": "ceph",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.crush_device_class": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.encrypted": "0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_id": "1",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.vdo": "0"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            },
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "vg_name": "ceph_vg1"
Dec  5 01:17:32 compute-0 great_solomon[227613]:        }
Dec  5 01:17:32 compute-0 great_solomon[227613]:    ],
Dec  5 01:17:32 compute-0 great_solomon[227613]:    "2": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:        {
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "devices": [
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "/dev/loop5"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            ],
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_name": "ceph_lv2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_size": "21470642176",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "name": "ceph_lv2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "tags": {
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.cluster_name": "ceph",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.crush_device_class": "",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.encrypted": "0",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osd_id": "2",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:                "ceph.vdo": "0"
Dec  5 01:17:32 compute-0 great_solomon[227613]:            },
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "type": "block",
Dec  5 01:17:32 compute-0 great_solomon[227613]:            "vg_name": "ceph_vg2"
Dec  5 01:17:32 compute-0 great_solomon[227613]:        }
Dec  5 01:17:32 compute-0 great_solomon[227613]:    ]
Dec  5 01:17:32 compute-0 great_solomon[227613]: }
Dec  5 01:17:32 compute-0 systemd[1]: libpod-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope: Deactivated successfully.
Dec  5 01:17:32 compute-0 podman[227592]: 2025-12-05 01:17:32.614366919 +0000 UTC m=+1.142617524 container died 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:17:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-309442ed554c246658b5919c2d9917f85f443479f93a38fd1017d6f6c86a3841-merged.mount: Deactivated successfully.
Dec  5 01:17:32 compute-0 podman[227592]: 2025-12-05 01:17:32.720164321 +0000 UTC m=+1.248414926 container remove 0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_solomon, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:17:32 compute-0 systemd[1]: libpod-conmon-0c09d993b0d2ecd12941bbd444f5d4261fd711cade95120e5571c7863ba5916b.scope: Deactivated successfully.
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.743251761 +0000 UTC m=+0.076627945 container create cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.708500718 +0000 UTC m=+0.041876982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:33 compute-0 systemd[1]: Started libpod-conmon-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope.
Dec  5 01:17:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.872679519 +0000 UTC m=+0.206055743 container init cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.890498893 +0000 UTC m=+0.223875107 container start cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.898252398 +0000 UTC m=+0.231628672 container attach cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:17:33 compute-0 intelligent_mestorf[227818]: 167 167
Dec  5 01:17:33 compute-0 systemd[1]: libpod-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope: Deactivated successfully.
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.901128217 +0000 UTC m=+0.234504441 container died cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:17:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ec27684b44392eb1b9d336c1d62f00ac749a6063922c58c3a9cf079ac35bc36-merged.mount: Deactivated successfully.
Dec  5 01:17:33 compute-0 podman[227802]: 2025-12-05 01:17:33.981815874 +0000 UTC m=+0.315192058 container remove cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:17:33 compute-0 systemd[1]: libpod-conmon-cbbe367712c5b3f58a4ea3e3d5f24b14684dec1980bbbc5643509f37e78ad9d7.scope: Deactivated successfully.
Dec  5 01:17:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v225: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 45 B/s, 3 objects/s recovering
Dec  5 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.317953622 +0000 UTC m=+0.098898813 container create 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.275545726 +0000 UTC m=+0.056490967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:17:34 compute-0 systemd[1]: Started libpod-conmon-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope.
Dec  5 01:17:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.507802155 +0000 UTC m=+0.288747306 container init 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.523487249 +0000 UTC m=+0.304432390 container start 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:17:34 compute-0 podman[227839]: 2025-12-05 01:17:34.533111566 +0000 UTC m=+0.314056737 container attach 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:17:34 compute-0 podman[227853]: 2025-12-05 01:17:34.536691775 +0000 UTC m=+0.152189509 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:17:35 compute-0 nice_wright[227862]: {
Dec  5 01:17:35 compute-0 nice_wright[227862]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_id": 0,
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "type": "bluestore"
Dec  5 01:17:35 compute-0 nice_wright[227862]:    },
Dec  5 01:17:35 compute-0 nice_wright[227862]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_id": 1,
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "type": "bluestore"
Dec  5 01:17:35 compute-0 nice_wright[227862]:    },
Dec  5 01:17:35 compute-0 nice_wright[227862]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_id": 2,
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:17:35 compute-0 nice_wright[227862]:        "type": "bluestore"
Dec  5 01:17:35 compute-0 nice_wright[227862]:    }
Dec  5 01:17:35 compute-0 nice_wright[227862]: }
Dec  5 01:17:35 compute-0 systemd[1]: libpod-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Deactivated successfully.
Dec  5 01:17:35 compute-0 systemd[1]: libpod-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Consumed 1.064s CPU time.
Dec  5 01:17:35 compute-0 podman[227839]: 2025-12-05 01:17:35.594820867 +0000 UTC m=+1.375766108 container died 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:17:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e0dfd6cd5f1b55186ab3e82fdd069ed69487b668fd086d0acadcb966397559e-merged.mount: Deactivated successfully.
Dec  5 01:17:35 compute-0 podman[227839]: 2025-12-05 01:17:35.697607297 +0000 UTC m=+1.478552458 container remove 7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:17:35 compute-0 systemd[1]: libpod-conmon-7a03f112de573fafd50382ee625feb286e94570cec6ccbd2bfbed67b1e6066f0.scope: Deactivated successfully.
Dec  5 01:17:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:17:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:17:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d2829ab6-dc82-4126-afdd-adf05cc5cba8 does not exist
Dec  5 01:17:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 85e3b550-8871-4c32-aa25-22592fc8669a does not exist
Dec  5 01:17:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  5 01:17:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  5 01:17:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v226: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Dec  5 01:17:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  5 01:17:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  5 01:17:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:17:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec  5 01:17:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:37 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec  5 01:17:37 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec  5 01:17:37 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec  5 01:17:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 1 objects/s recovering
Dec  5 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Dec  5 01:17:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  5 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  5 01:17:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  5 01:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  5 01:17:38 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  5 01:17:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  5 01:17:39 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec  5 01:17:39 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec  5 01:17:39 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  5 01:17:39 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  5 01:17:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  5 01:17:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Dec  5 01:17:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  5 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  5 01:17:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  5 01:17:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  5 01:17:40 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  5 01:17:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  5 01:17:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec  5 01:17:41 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec  5 01:17:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec  5 01:17:41 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec  5 01:17:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  5 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v231: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Dec  5 01:17:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  5 01:17:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  5 01:17:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  5 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  5 01:17:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  5 01:17:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  5 01:17:42 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  5 01:17:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 108 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=11.241013527s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=50'586 mlcod 0'0 active pruub 198.502120972s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:42 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 108 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108 pruub=11.240839958s) [0] r=-1 lpr=108 pi=[80,108)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 198.502120972s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  5 01:17:42 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 108 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=108) [0] r=0 lpr=108 pi=[80,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  5 01:17:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  5 01:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  5 01:17:43 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  5 01:17:43 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:43 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=-1 lpr=109 pi=[80,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 109 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:43 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 109 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=80/81 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Dec  5 01:17:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  5 01:17:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec  5 01:17:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec  5 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  5 01:17:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Dec  5 01:17:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  5 01:17:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  5 01:17:44 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  5 01:17:45 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec  5 01:17:45 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec  5 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 110 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=80/80 les/c/f=81/81/0 sis=109) [0]/[2] async=[0] r=0 lpr=109 pi=[80,109)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  5 01:17:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec  5 01:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  5 01:17:45 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  5 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=15.265457153s) [0] async=[0] r=-1 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 50'586 active pruub 205.598022461s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:45 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=109/110 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111 pruub=15.263625145s) [0] r=-1 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 205.598022461s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:45 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 111 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v237: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Dec  5 01:17:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:17:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.e deep-scrub starts
Dec  5 01:17:46 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.e deep-scrub ok
Dec  5 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  5 01:17:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Dec  5 01:17:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  5 01:17:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  5 01:17:46 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  5 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 112 pg[9.1c( v 50'586 (0'0,50'586] local-lis/les=111/112 n=6 ec=53/44 lis/c=109/80 les/c/f=110/81/0 sis=111) [0] r=0 lpr=111 pi=[80,111)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 112 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=11.538517952s) [0] r=-1 lpr=112 pi=[69,112)/1 crt=50'586 mlcod 0'0 active pruub 202.917968750s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 112 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112 pruub=11.538378716s) [0] r=-1 lpr=112 pi=[69,112)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 202.917968750s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=112) [0] r=0 lpr=112 pi=[69,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  5 01:17:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  5 01:17:47 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  5 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 113 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:47 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 113 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=69/70 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:47 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 113 pg[9.1e( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[69,113)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec  5 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  5 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  5 01:17:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  5 01:17:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v241: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:17:48 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 114 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=69/69 les/c/f=70/70/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[69,113)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Dec  5 01:17:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:17:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.17 scrub starts
Dec  5 01:17:48 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.17 scrub ok
Dec  5 01:17:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  5 01:17:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:17:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  5 01:17:49 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  5 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=15.017637253s) [0] async=[0] r=-1 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 50'586 active pruub 208.529220581s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=113/114 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115 pruub=15.017431259s) [0] r=-1 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 208.529220581s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115 pruub=10.987185478s) [1] r=-1 lpr=115 pi=[71,115)/1 crt=50'586 mlcod 0'0 active pruub 204.499038696s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:49 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 115 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115 pruub=10.987100601s) [1] r=-1 lpr=115 pi=[71,115)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 204.499038696s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  5 01:17:49 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:49 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 115 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:49 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 115 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=115) [1] r=0 lpr=115 pi=[71,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec  5 01:17:49 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec  5 01:17:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec  5 01:17:49 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec  5 01:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  5 01:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  5 01:17:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v243: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 3 objects/s recovering
Dec  5 01:17:50 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  5 01:17:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[71,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:50 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 116 pg[9.1f( empty local-lis/les=0/0 n=0 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[71,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 116 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:50 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 116 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=71/72 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:50 compute-0 ceph-osd[206647]: osd.0 pg_epoch: 116 pg[9.1e( v 50'586 (0'0,50'586] local-lis/les=115/116 n=6 ec=53/44 lis/c=113/69 les/c/f=114/70/0 sis=115) [0] r=0 lpr=115 pi=[69,115)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:50 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  5 01:17:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec  5 01:17:50 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec  5 01:17:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Dec  5 01:17:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  5 01:17:51 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Dec  5 01:17:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  5 01:17:51 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  5 01:17:51 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 117 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=71/71 les/c/f=72/72/0 sis=116) [1]/[2] async=[1] r=0 lpr=116 pi=[71,116)/1 crt=50'586 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  5 01:17:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  5 01:17:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  5 01:17:52 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118 pruub=15.226383209s) [1] async=[1] r=-1 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 50'586 active pruub 211.724075317s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:52 compute-0 ceph-osd[208828]: osd.2 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=116/117 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118 pruub=15.226179123s) [1] r=-1 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 unknown NOTIFY pruub 211.724075317s@ mbc={}] state<Start>: transitioning to Stray
Dec  5 01:17:52 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 luod=0'0 crt=50'586 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec  5 01:17:52 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 118 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=0/0 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  5 01:17:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  5 01:17:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  5 01:17:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Dec  5 01:17:52 compute-0 podman[228014]: 2025-12-05 01:17:52.769377396 +0000 UTC m=+0.166915438 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  5 01:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec  5 01:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec  5 01:17:53 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec  5 01:17:53 compute-0 ceph-osd[207795]: osd.1 pg_epoch: 119 pg[9.1f( v 50'586 (0'0,50'586] local-lis/les=118/119 n=6 ec=53/44 lis/c=116/71 les/c/f=117/72/0 sis=118) [1] r=0 lpr=118 pi=[71,118)/1 crt=50'586 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  5 01:17:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  5 01:17:53 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  5 01:17:53 compute-0 podman[228033]: 2025-12-05 01:17:53.720617654 +0000 UTC m=+0.123558916 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:17:53 compute-0 podman[228034]: 2025-12-05 01:17:53.846490234 +0000 UTC m=+0.247723238 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  5 01:17:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v249: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Dec  5 01:17:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec  5 01:17:54 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec  5 01:17:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec  5 01:17:54 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec  5 01:17:55 compute-0 podman[228081]: 2025-12-05 01:17:55.740230226 +0000 UTC m=+0.141463842 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 01:17:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 1 active+remapped, 320 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec  5 01:17:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:17:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec  5 01:17:57 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec  5 01:17:57 compute-0 podman[228102]: 2025-12-05 01:17:57.743340332 +0000 UTC m=+0.141691268 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
Dec  5 01:17:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  5 01:17:58 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  5 01:17:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v251: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 0 objects/s recovering
Dec  5 01:17:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  5 01:17:59 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  5 01:17:59 compute-0 python3.9[228273]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:17:59 compute-0 podman[158197]: time="2025-12-05T01:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6808 "" "Go-http-client/1.1"
Dec  5 01:18:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  5 01:18:00 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  5 01:18:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec  5 01:18:00 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec  5 01:18:00 compute-0 podman[228409]: 2025-12-05 01:18:00.726677571 +0000 UTC m=+0.130907330 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec  5 01:18:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec  5 01:18:01 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: ERROR   01:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:18:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:18:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Dec  5 01:18:01 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Dec  5 01:18:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  5 01:18:02 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  5 01:18:02 compute-0 python3.9[228581]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  5 01:18:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec  5 01:18:03 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec  5 01:18:03 compute-0 python3.9[228733]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  5 01:18:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec  5 01:18:04 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec  5 01:18:04 compute-0 python3.9[228885]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:18:04 compute-0 podman[228886]: 2025-12-05 01:18:04.744080733 +0000 UTC m=+0.146430820 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:18:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  5 01:18:05 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  5 01:18:05 compute-0 python3.9[229061]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  5 01:18:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v255: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  5 01:18:06 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  5 01:18:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  5 01:18:06 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  5 01:18:06 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1c deep-scrub starts
Dec  5 01:18:06 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1c deep-scrub ok
Dec  5 01:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  5 01:18:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  5 01:18:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec  5 01:18:07 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec  5 01:18:07 compute-0 python3.9[229213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:18:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v256: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec  5 01:18:08 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec  5 01:18:08 compute-0 python3.9[229365]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:18:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec  5 01:18:09 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec  5 01:18:09 compute-0 python3.9[229443]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:18:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:10 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec  5 01:18:10 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec  5 01:18:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec  5 01:18:10 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec  5 01:18:11 compute-0 python3.9[229595]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:18:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v258: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  5 01:18:12 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  5 01:18:12 compute-0 python3.9[229749]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  5 01:18:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec  5 01:18:13 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec  5 01:18:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec  5 01:18:13 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec  5 01:18:13 compute-0 python3.9[229902]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  5 01:18:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:15 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  5 01:18:15 compute-0 python3.9[230055]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  5 01:18:15 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  5 01:18:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec  5 01:18:15 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:18:16
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'default.rgw.meta', '.rgw.root']
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v260: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:18:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:18:16 compute-0 python3.9[230207]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  5 01:18:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  5 01:18:16 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  5 01:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:17 compute-0 python3.9[230359]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:18:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  5 01:18:18 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  5 01:18:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Dec  5 01:18:18 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Dec  5 01:18:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v262: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec  5 01:18:20 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec  5 01:18:20 compute-0 python3.9[230513]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:18:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec  5 01:18:21 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec  5 01:18:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec  5 01:18:21 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec  5 01:18:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  5 01:18:21 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  5 01:18:21 compute-0 python3.9[230665]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:18:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v263: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:22 compute-0 python3.9[230743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:18:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  5 01:18:23 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  5 01:18:23 compute-0 podman[230869]: 2025-12-05 01:18:23.583780342 +0000 UTC m=+0.126150900 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:18:23 compute-0 python3.9[230916]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:18:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:24 compute-0 podman[230966]: 2025-12-05 01:18:24.302585936 +0000 UTC m=+0.105207960 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:18:24 compute-0 podman[230967]: 2025-12-05 01:18:24.368127155 +0000 UTC m=+0.168454934 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:18:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  5 01:18:24 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  5 01:18:24 compute-0 python3.9[231036]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:18:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  5 01:18:25 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  5 01:18:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec  5 01:18:25 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec  5 01:18:26 compute-0 podman[231193]: 2025-12-05 01:18:26.0016459 +0000 UTC m=+0.135904866 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:18:26 compute-0 python3.9[231194]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:18:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v265: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec  5 01:18:26 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec  5 01:18:26 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec  5 01:18:26 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec  5 01:18:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Dec  5 01:18:26 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Dec  5 01:18:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:28 compute-0 podman[231342]: 2025-12-05 01:18:28.720727385 +0000 UTC m=+0.135486144 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, container_name=kepler, release=1214.1726694543, name=ubi9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:18:28 compute-0 python3.9[231383]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:18:29 compute-0 podman[158197]: time="2025-12-05T01:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6807 "" "Go-http-client/1.1"
Dec  5 01:18:29 compute-0 python3.9[231540]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  5 01:18:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v267: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:30 compute-0 python3.9[231690]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: ERROR   01:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:18:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:18:31 compute-0 podman[231760]: 2025-12-05 01:18:31.718500616 +0000 UTC m=+0.118962408 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Dec  5 01:18:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:32 compute-0 python3.9[231865]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:18:32 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  5 01:18:32 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec  5 01:18:32 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  5 01:18:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  5 01:18:33 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  5 01:18:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.18 deep-scrub starts
Dec  5 01:18:33 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.18 deep-scrub ok
Dec  5 01:18:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec  5 01:18:33 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec  5 01:18:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v269: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:34 compute-0 python3.9[232026]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  5 01:18:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1e deep-scrub starts
Dec  5 01:18:34 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1e deep-scrub ok
Dec  5 01:18:35 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec  5 01:18:35 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec  5 01:18:35 compute-0 podman[232051]: 2025-12-05 01:18:35.550972767 +0000 UTC m=+0.111541008 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:18:35 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1c scrub starts
Dec  5 01:18:35 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1c scrub ok
Dec  5 01:18:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v270: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1b deep-scrub starts
Dec  5 01:18:36 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1b deep-scrub ok
Dec  5 01:18:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.5 deep-scrub starts
Dec  5 01:18:36 compute-0 ceph-osd[208828]: log_channel(cluster) log [DBG] : 10.5 deep-scrub ok
Dec  5 01:18:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec  5 01:18:36 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:37 compute-0 python3.9[232314]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 65a90eeb-7a5d-40e3-ab75-f78d704b4ae0 does not exist
Dec  5 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5547a444-3a2b-40c5-b06a-3deea5e80189 does not exist
Dec  5 01:18:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bc4f2eff-bdb4-4ff8-abe2-ea599c924709 does not exist
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:18:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:18:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:18:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:18:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:18:38 compute-0 python3.9[232606]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.580216975 +0000 UTC m=+0.090658609 container create 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:18:38 compute-0 systemd[194721]: Created slice User Background Tasks Slice.
Dec  5 01:18:38 compute-0 systemd[194721]: Starting Cleanup of User's Temporary Files and Directories...
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.531444639 +0000 UTC m=+0.041886333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:18:38 compute-0 systemd[194721]: Finished Cleanup of User's Temporary Files and Directories.
Dec  5 01:18:38 compute-0 systemd[1]: Started libpod-conmon-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope.
Dec  5 01:18:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.730702731 +0000 UTC m=+0.241144375 container init 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.745632943 +0000 UTC m=+0.256074557 container start 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.751675973 +0000 UTC m=+0.262117627 container attach 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:18:38 compute-0 vibrant_lederberg[232646]: 167 167
Dec  5 01:18:38 compute-0 systemd[1]: libpod-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope: Deactivated successfully.
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.765078361 +0000 UTC m=+0.275519995 container died 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:18:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8541f404ee7442693cbe701a67b8dfca0d4fcfcefb1cc8d7cfedec27699a53-merged.mount: Deactivated successfully.
Dec  5 01:18:38 compute-0 podman[232627]: 2025-12-05 01:18:38.858166308 +0000 UTC m=+0.368607942 container remove 04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:18:38 compute-0 systemd[1]: libpod-conmon-04d928d8897781691e379cb96afb7f8e19c162aee2932a7aa888f5f56c6b8983.scope: Deactivated successfully.
Dec  5 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.151828995 +0000 UTC m=+0.098820490 container create 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.105560639 +0000 UTC m=+0.052552114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:18:39 compute-0 systemd[1]: Started libpod-conmon-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope.
Dec  5 01:18:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:39 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec  5 01:18:39 compute-0 systemd[1]: session-42.scope: Consumed 1min 26.842s CPU time.
Dec  5 01:18:39 compute-0 systemd-logind[792]: Session 42 logged out. Waiting for processes to exit.
Dec  5 01:18:39 compute-0 systemd-logind[792]: Removed session 42.
Dec  5 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.330802015 +0000 UTC m=+0.277793500 container init 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.367697466 +0000 UTC m=+0.314688961 container start 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:18:39 compute-0 podman[232693]: 2025-12-05 01:18:39.375733863 +0000 UTC m=+0.322725408 container attach 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:18:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v272: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1d deep-scrub starts
Dec  5 01:18:40 compute-0 ceph-osd[207795]: log_channel(cluster) log [DBG] : 6.1d deep-scrub ok
Dec  5 01:18:40 compute-0 intelligent_rubin[232710]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:18:40 compute-0 intelligent_rubin[232710]: --> relative data size: 1.0
Dec  5 01:18:40 compute-0 intelligent_rubin[232710]: --> All data devices are unavailable
Dec  5 01:18:40 compute-0 systemd[1]: libpod-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Deactivated successfully.
Dec  5 01:18:40 compute-0 systemd[1]: libpod-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Consumed 1.362s CPU time.
Dec  5 01:18:40 compute-0 podman[232693]: 2025-12-05 01:18:40.795532515 +0000 UTC m=+1.742524020 container died 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:18:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8ded3aa6165e7167bf4c4752f28cadf1e8bcfcedee0261560b20a37c75584d-merged.mount: Deactivated successfully.
Dec  5 01:18:40 compute-0 podman[232693]: 2025-12-05 01:18:40.927330984 +0000 UTC m=+1.874322449 container remove 286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:18:40 compute-0 systemd[1]: libpod-conmon-286171f8bd0664cccf0b06b1a35a50e0071b8150e8f501c3c65a3d380d4b94fa.scope: Deactivated successfully.
Dec  5 01:18:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.186136965 +0000 UTC m=+0.099280212 container create bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:18:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.149180002 +0000 UTC m=+0.062323299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:18:42 compute-0 systemd[1]: Started libpod-conmon-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope.
Dec  5 01:18:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.32028266 +0000 UTC m=+0.233425897 container init bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.331705353 +0000 UTC m=+0.244848570 container start bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.336124138 +0000 UTC m=+0.249267375 container attach bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:18:42 compute-0 relaxed_shockley[232909]: 167 167
Dec  5 01:18:42 compute-0 systemd[1]: libpod-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope: Deactivated successfully.
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.342197379 +0000 UTC m=+0.255340616 container died bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:18:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde7122bd31390438d706f94baadad9c6222eca31e60c700c4201fa5c8ac8e3c-merged.mount: Deactivated successfully.
Dec  5 01:18:42 compute-0 podman[232893]: 2025-12-05 01:18:42.413700696 +0000 UTC m=+0.326843923 container remove bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:18:42 compute-0 systemd[1]: libpod-conmon-bc0f2f2f4db2171a2506e1edfcd70414b68c145a78a848f54dcfd7e66cddbf9e.scope: Deactivated successfully.
Dec  5 01:18:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  5 01:18:42 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.543 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.545 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.545 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.558 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:18:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.652371651 +0000 UTC m=+0.059981893 container create 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.633382405 +0000 UTC m=+0.040992677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:18:42 compute-0 systemd[1]: Started libpod-conmon-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope.
Dec  5 01:18:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.808517297 +0000 UTC m=+0.216127579 container init 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.824040235 +0000 UTC m=+0.231650477 container start 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:18:42 compute-0 podman[232933]: 2025-12-05 01:18:42.830205839 +0000 UTC m=+0.237816081 container attach 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:18:43 compute-0 magical_mclean[232947]: {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    "0": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "devices": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "/dev/loop3"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            ],
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_name": "ceph_lv0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_size": "21470642176",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "name": "ceph_lv0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "tags": {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_name": "ceph",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.crush_device_class": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.encrypted": "0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_id": "0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.vdo": "0"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            },
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "vg_name": "ceph_vg0"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        }
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    ],
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    "1": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "devices": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "/dev/loop4"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            ],
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_name": "ceph_lv1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_size": "21470642176",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "name": "ceph_lv1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "tags": {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_name": "ceph",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.crush_device_class": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.encrypted": "0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_id": "1",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.vdo": "0"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            },
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "vg_name": "ceph_vg1"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        }
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    ],
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    "2": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "devices": [
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "/dev/loop5"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            ],
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_name": "ceph_lv2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_size": "21470642176",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "name": "ceph_lv2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "tags": {
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.cluster_name": "ceph",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.crush_device_class": "",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.encrypted": "0",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osd_id": "2",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:                "ceph.vdo": "0"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            },
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "type": "block",
Dec  5 01:18:43 compute-0 magical_mclean[232947]:            "vg_name": "ceph_vg2"
Dec  5 01:18:43 compute-0 magical_mclean[232947]:        }
Dec  5 01:18:43 compute-0 magical_mclean[232947]:    ]
Dec  5 01:18:43 compute-0 magical_mclean[232947]: }
Dec  5 01:18:43 compute-0 systemd[1]: libpod-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope: Deactivated successfully.
Dec  5 01:18:43 compute-0 podman[232933]: 2025-12-05 01:18:43.76370003 +0000 UTC m=+1.171310302 container died 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:18:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-9908a7b1aa45e442419ad9a72967f2fa10226124fafa061f7d9dea2a79eadc08-merged.mount: Deactivated successfully.
Dec  5 01:18:43 compute-0 podman[232933]: 2025-12-05 01:18:43.877269575 +0000 UTC m=+1.284879807 container remove 62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:18:43 compute-0 systemd[1]: libpod-conmon-62a1f87e773d6181bd57fb7f849e734132434e7d9472986f310b8da17330670b.scope: Deactivated successfully.
Dec  5 01:18:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v274: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:18:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec  5 01:18:44 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec  5 01:20:51 compute-0 python3.9[244478]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:20:51 compute-0 rsyslogd[188644]: imjournal: 1573 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  5 01:20:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:20:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:20:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.11 deep-scrub starts
Dec  5 01:20:52 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.11 deep-scrub ok
Dec  5 01:20:52 compute-0 python3.9[244630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:20:53 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec  5 01:20:53 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec  5 01:20:53 compute-0 python3.9[244708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:20:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:20:54 compute-0 python3.9[244860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:20:55 compute-0 python3.9[244938]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:20:56 compute-0 podman[245067]: 2025-12-05 01:20:56.238876377 +0000 UTC m=+0.114412046 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:20:56 compute-0 podman[245063]: 2025-12-05 01:20:56.265760065 +0000 UTC m=+0.145398380 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 01:20:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:20:56 compute-0 podman[245070]: 2025-12-05 01:20:56.313276154 +0000 UTC m=+0.182116714 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 01:20:56 compute-0 python3.9[245134]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:20:56 compute-0 systemd[1]: Reloading.
Dec  5 01:20:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:20:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:20:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:20:57 compute-0 podman[245317]: 2025-12-05 01:20:57.973510564 +0000 UTC m=+0.124388947 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:20:58 compute-0 python3.9[245365]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:20:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:20:58 compute-0 python3.9[245515]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d01fcc6d-d310-4ca7-b745-d968aa92e7e6 does not exist
Dec  5 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 846cb35c-cfa8-4d1b-9425-58baddde12a9 does not exist
Dec  5 01:20:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d78811b8-b6d4-4a3c-b535-dbbd18dc1832 does not exist
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:20:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:20:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:20:59 compute-0 podman[158197]: time="2025-12-05T01:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
Dec  5 01:20:59 compute-0 python3.9[245770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:21:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:21:00 compute-0 python3.9[245927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.62583485 +0000 UTC m=+0.079555783 container create 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.601862334 +0000 UTC m=+0.055583357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:00 compute-0 systemd[1]: Started libpod-conmon-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope.
Dec  5 01:21:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.769260703 +0000 UTC m=+0.222981646 container init 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.779681687 +0000 UTC m=+0.233402660 container start 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.786392156 +0000 UTC m=+0.240113099 container attach 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:21:00 compute-0 compassionate_jennings[245977]: 167 167
Dec  5 01:21:00 compute-0 systemd[1]: libpod-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope: Deactivated successfully.
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.795441731 +0000 UTC m=+0.249162704 container died 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:21:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-868785e3d73f9132ed6b15913403ad22a096fa6ad32de2e13f80acabda5dcd68-merged.mount: Deactivated successfully.
Dec  5 01:21:00 compute-0 podman[245941]: 2025-12-05 01:21:00.881262341 +0000 UTC m=+0.334983284 container remove 605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_jennings, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:21:00 compute-0 systemd[1]: libpod-conmon-605225d971bcd7cbda80fecec1e958dc41de3fc2758e590a15ee691ee9d8efc0.scope: Deactivated successfully.
Dec  5 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.151389015 +0000 UTC m=+0.089948666 container create fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.123103688 +0000 UTC m=+0.061663339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:01 compute-0 systemd[1]: Started libpod-conmon-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope.
Dec  5 01:21:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.329457145 +0000 UTC m=+0.268016886 container init fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.349178251 +0000 UTC m=+0.287737892 container start fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:21:01 compute-0 podman[246060]: 2025-12-05 01:21:01.355053907 +0000 UTC m=+0.293613558 container attach fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: ERROR   01:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:21:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:21:01 compute-0 python3.9[246150]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:21:01 compute-0 systemd[1]: Reloading.
Dec  5 01:21:01 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:21:01 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:21:02 compute-0 podman[246152]: 2025-12-05 01:21:02.000282435 +0000 UTC m=+0.198313121 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc.)
Dec  5 01:21:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:02 compute-0 systemd[1]: Starting Create netns directory...
Dec  5 01:21:02 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  5 01:21:02 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  5 01:21:02 compute-0 systemd[1]: Finished Create netns directory.
Dec  5 01:21:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:02 compute-0 nervous_hermann[246116]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:21:02 compute-0 nervous_hermann[246116]: --> relative data size: 1.0
Dec  5 01:21:02 compute-0 nervous_hermann[246116]: --> All data devices are unavailable
Dec  5 01:21:02 compute-0 systemd[1]: libpod-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Deactivated successfully.
Dec  5 01:21:02 compute-0 systemd[1]: libpod-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Consumed 1.181s CPU time.
Dec  5 01:21:02 compute-0 podman[246060]: 2025-12-05 01:21:02.640572184 +0000 UTC m=+1.579131825 container died fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:21:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4fa784ec7787b8d76e978e9218ad3bb88f4febd257ed63856dd47a869932469-merged.mount: Deactivated successfully.
Dec  5 01:21:02 compute-0 podman[246060]: 2025-12-05 01:21:02.746670175 +0000 UTC m=+1.685229846 container remove fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:21:02 compute-0 systemd[1]: libpod-conmon-fe7eddc0eb3a0e0ff9fe9d5b53d3282f59f84b825823d5d6138f88d435be5c6c.scope: Deactivated successfully.
Dec  5 01:21:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec  5 01:21:03 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec  5 01:21:03 compute-0 python3.9[246494]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:21:03 compute-0 network[246536]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:21:03 compute-0 network[246537]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:21:03 compute-0 network[246542]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:21:03 compute-0 podman[246555]: 2025-12-05 01:21:03.841130307 +0000 UTC m=+0.077198587 container create 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:21:03 compute-0 podman[246555]: 2025-12-05 01:21:03.806678066 +0000 UTC m=+0.042746396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:04 compute-0 systemd[1]: Started libpod-conmon-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope.
Dec  5 01:21:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.654220877 +0000 UTC m=+0.890289167 container init 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.675431415 +0000 UTC m=+0.911499675 container start 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.680077465 +0000 UTC m=+0.916145795 container attach 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:21:04 compute-0 zen_khayyam[246573]: 167 167
Dec  5 01:21:04 compute-0 systemd[1]: libpod-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope: Deactivated successfully.
Dec  5 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.692585448 +0000 UTC m=+0.928653698 container died 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:21:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4c067d9d830b76cde5d90373a2fb48473aee060a614c8ff2e79c22f013b32dc-merged.mount: Deactivated successfully.
Dec  5 01:21:04 compute-0 podman[246555]: 2025-12-05 01:21:04.759218806 +0000 UTC m=+0.995287056 container remove 22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:21:04 compute-0 podman[246574]: 2025-12-05 01:21:04.775588598 +0000 UTC m=+0.161543825 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal)
Dec  5 01:21:04 compute-0 systemd[1]: libpod-conmon-22c70057f07abf69dfed84da7df0205dea158fd55e19e05c20f5c89420561834.scope: Deactivated successfully.
Dec  5 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.023023533 +0000 UTC m=+0.103973492 container create 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:04.988335015 +0000 UTC m=+0.069285064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:05 compute-0 systemd[1]: Started libpod-conmon-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope.
Dec  5 01:21:05 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.190288018 +0000 UTC m=+0.271238057 container init 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:21:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec  5 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.214338096 +0000 UTC m=+0.295288085 container start 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:21:05 compute-0 podman[246623]: 2025-12-05 01:21:05.220596602 +0000 UTC m=+0.301546641 container attach 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 01:21:05 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec  5 01:21:06 compute-0 vigilant_elion[246645]: {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    "0": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "devices": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "/dev/loop3"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            ],
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_name": "ceph_lv0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_size": "21470642176",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "name": "ceph_lv0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "tags": {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_name": "ceph",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.crush_device_class": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.encrypted": "0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_id": "0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.vdo": "0"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            },
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "vg_name": "ceph_vg0"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        }
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    ],
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    "1": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "devices": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "/dev/loop4"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            ],
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_name": "ceph_lv1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_size": "21470642176",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "name": "ceph_lv1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "tags": {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_name": "ceph",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.crush_device_class": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.encrypted": "0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_id": "1",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.vdo": "0"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            },
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "vg_name": "ceph_vg1"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        }
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    ],
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    "2": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "devices": [
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "/dev/loop5"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            ],
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_name": "ceph_lv2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_size": "21470642176",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "name": "ceph_lv2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "tags": {
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.cluster_name": "ceph",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.crush_device_class": "",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.encrypted": "0",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osd_id": "2",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:                "ceph.vdo": "0"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            },
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "type": "block",
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:            "vg_name": "ceph_vg2"
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:        }
Dec  5 01:21:06 compute-0 vigilant_elion[246645]:    ]
Dec  5 01:21:06 compute-0 vigilant_elion[246645]: }
Dec  5 01:21:06 compute-0 systemd[1]: libpod-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope: Deactivated successfully.
Dec  5 01:21:06 compute-0 podman[246623]: 2025-12-05 01:21:06.135229285 +0000 UTC m=+1.216179264 container died 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:21:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-864b5963274e08ddeea75086aff247ecbd2014138ca74235a13c94a91e203d46-merged.mount: Deactivated successfully.
Dec  5 01:21:06 compute-0 podman[246623]: 2025-12-05 01:21:06.266787294 +0000 UTC m=+1.347737253 container remove 6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_elion, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:21:06 compute-0 systemd[1]: libpod-conmon-6e7f901daf180bc4e68a6335eb04d166250ecbc872d5eaeec0a49f3d39d5784d.scope: Deactivated successfully.
Dec  5 01:21:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1e deep-scrub starts
Dec  5 01:21:07 compute-0 ceph-osd[206647]: log_channel(cluster) log [DBG] : 9.1e deep-scrub ok
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.485980111 +0000 UTC m=+0.087492907 container create 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.449799891 +0000 UTC m=+0.051312737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:07 compute-0 systemd[1]: Started libpod-conmon-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope.
Dec  5 01:21:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.620025839 +0000 UTC m=+0.221538705 container init 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.636026691 +0000 UTC m=+0.237539477 container start 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.644354715 +0000 UTC m=+0.245867561 container attach 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:21:07 compute-0 romantic_mclean[246887]: 167 167
Dec  5 01:21:07 compute-0 systemd[1]: libpod-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope: Deactivated successfully.
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.650114398 +0000 UTC m=+0.251627184 container died 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:21:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c168ee61dcd1ffdc481cf498599225c61813993f64b977d1c2c58c4cdb958711-merged.mount: Deactivated successfully.
Dec  5 01:21:07 compute-0 podman[246866]: 2025-12-05 01:21:07.727272253 +0000 UTC m=+0.328785019 container remove 729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:21:07 compute-0 systemd[1]: libpod-conmon-729cb852d0c6a243a3f4323e4d0426c14c0c2074bc3839a8f7e8d9572cedde2e.scope: Deactivated successfully.
Dec  5 01:21:07 compute-0 podman[246921]: 2025-12-05 01:21:07.986259743 +0000 UTC m=+0.069839669 container create 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:07.958319106 +0000 UTC m=+0.041899062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:21:08 compute-0 systemd[1]: Started libpod-conmon-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope.
Dec  5 01:21:08 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.149874076 +0000 UTC m=+0.233454022 container init 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.173539753 +0000 UTC m=+0.257119699 container start 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:21:08 compute-0 podman[246921]: 2025-12-05 01:21:08.1805366 +0000 UTC m=+0.264116586 container attach 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:21:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]: {
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_id": 0,
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "type": "bluestore"
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    },
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_id": 1,
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "type": "bluestore"
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    },
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_id": 2,
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:        "type": "bluestore"
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]:    }
Dec  5 01:21:09 compute-0 admiring_meninsky[246941]: }
Dec  5 01:21:09 compute-0 systemd[1]: libpod-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Deactivated successfully.
Dec  5 01:21:09 compute-0 podman[246921]: 2025-12-05 01:21:09.3391363 +0000 UTC m=+1.422716306 container died 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:21:09 compute-0 systemd[1]: libpod-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Consumed 1.162s CPU time.
Dec  5 01:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a01127a6b8799c994beb2b982b949f9cc458d996b2c557f38824421d9dd4d73-merged.mount: Deactivated successfully.
Dec  5 01:21:09 compute-0 podman[246921]: 2025-12-05 01:21:09.456712814 +0000 UTC m=+1.540292750 container remove 808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:21:09 compute-0 systemd[1]: libpod-conmon-808474f990db6bf5e01a5c06a27235c17476ab24dca2d08f042d36e4501781a3.scope: Deactivated successfully.
Dec  5 01:21:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:21:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:21:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:21:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:21:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d02dab09-89e8-4b15-8878-b7bb2756d48d does not exist
Dec  5 01:21:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fcc66374-e570-4807-ac04-720a205552f6 does not exist
Dec  5 01:21:09 compute-0 podman[247024]: 2025-12-05 01:21:09.523936299 +0000 UTC m=+0.146819399 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:21:10 compute-0 python3.9[247229]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:21:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:21:10 compute-0 python3.9[247307]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:11 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec  5 01:21:11 compute-0 systemd[1]: session-25.scope: Consumed 2min 46.666s CPU time.
Dec  5 01:21:11 compute-0 systemd-logind[792]: Session 25 logged out. Waiting for processes to exit.
Dec  5 01:21:11 compute-0 systemd-logind[792]: Removed session 25.
Dec  5 01:21:11 compute-0 python3.9[247459]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:13 compute-0 python3.9[247611]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:13 compute-0 python3.9[247689]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:15 compute-0 python3.9[247841]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  5 01:21:15 compute-0 systemd[1]: Starting Time & Date Service...
Dec  5 01:21:15 compute-0 systemd[1]: Started Time & Date Service.
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:21:16
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes']
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:21:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:21:16 compute-0 python3.9[247997]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:17 compute-0 python3.9[248149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:18 compute-0 python3.9[248227]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:19 compute-0 python3.9[248379]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:20 compute-0 python3.9[248458]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.52juk_mk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:21 compute-0 python3.9[248610]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:22 compute-0 python3.9[248688]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:23 compute-0 python3.9[248840]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:21:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:24 compute-0 python3[248993]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  5 01:21:25 compute-0 python3.9[249145]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:21:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:26 compute-0 python3.9[249223]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:26 compute-0 podman[249245]: 2025-12-05 01:21:26.705016867 +0000 UTC m=+0.104938369 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Dec  5 01:21:26 compute-0 podman[249249]: 2025-12-05 01:21:26.740150387 +0000 UTC m=+0.136359405 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:21:26 compute-0 podman[249250]: 2025-12-05 01:21:26.747560646 +0000 UTC m=+0.140271875 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:21:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:27 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:21:27 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:21:27 compute-0 python3.9[249442]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:28 compute-0 python3.9[249520]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:28 compute-0 podman[249572]: 2025-12-05 01:21:28.730145514 +0000 UTC m=+0.138608238 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 01:21:29 compute-0 python3.9[249690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:29 compute-0 podman[158197]: time="2025-12-05T01:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6826 "" "Go-http-client/1.1"
Dec  5 01:21:30 compute-0 python3.9[249768]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:31 compute-0 python3.9[249920]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: ERROR   01:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:21:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:21:31 compute-0 python3.9[249998]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:32 compute-0 podman[250098]: 2025-12-05 01:21:32.702655696 +0000 UTC m=+0.109360853 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Dec  5 01:21:33 compute-0 python3.9[250167]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:33 compute-0 python3.9[250245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:34 compute-0 python3.9[250397]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:21:35 compute-0 podman[250477]: 2025-12-05 01:21:35.749375341 +0000 UTC m=+0.157260484 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git)
Dec  5 01:21:36 compute-0 python3.9[250571]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:37 compute-0 python3.9[250723]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:38 compute-0 python3.9[250875]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:39 compute-0 podman[250999]: 2025-12-05 01:21:39.719596707 +0000 UTC m=+0.126729752 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:21:39 compute-0 python3.9[251049]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  5 01:21:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:41 compute-0 python3.9[251202]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  5 01:21:41 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Dec  5 01:21:41 compute-0 systemd[1]: session-47.scope: Consumed 51.898s CPU time.
Dec  5 01:21:41 compute-0 systemd-logind[792]: Session 47 logged out. Waiting for processes to exit.
Dec  5 01:21:41 compute-0 systemd-logind[792]: Removed session 47.
Dec  5 01:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:45 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:21:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:47 compute-0 systemd-logind[792]: New session 48 of user zuul.
Dec  5 01:21:47 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec  5 01:21:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:48 compute-0 python3.9[251384]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  5 01:21:49 compute-0 python3.9[251537]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:21:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:51 compute-0 python3.9[251691]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec  5 01:21:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:52 compute-0 python3.9[251843]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.xo5an9og follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:21:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:53 compute-0 python3.9[251968]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.xo5an9og mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897711.3883753-44-120664731327028/.source.xo5an9og _original_basename=.bkge3syv follow=False checksum=33c8d724534b96f4d35998d48c2d1395ac713ad6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:54 compute-0 python3.9[252120]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:21:56 compute-0 python3.9[252272]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD34iQxSDvRWxXWiq324tvvnkHz60HCvPTP/DU7o5oImJ7L5PeQTe9tPl2QVsPDuWSCrwTEupWDG8h+dMSTlGmE2dOPB66Zq0d9sww65ZtOq0JsaxhPfTB3aJe6aQDcYq9WQ/1T/lNE0Do7wQL88mneNtNMuLZD9Irm2WwDI38II50hBLyhLkuA6ik5m8wn++kFZPdu0pcYz24ameu4wB8DSKH8UAT3GBfc11AP8MuI6xtpcOT5Dr88jHtVEYH8eW4XWrKQeyZddDcJui/f6NqC4NrPSF4YgDRQ1z6/33N2E9EycvbOgdOt9pq1jpYaWkMHl2KeaAbNoAdSuXTGDhvCzv18a5QdOMVV7965nJMnpteZZjrhzpHSFkbnMvAaoktDOMhKkfPYUY6HhVdkVM7FntS5oT76c92NL3HNHDuV7Oh57/0epCuWK6LT+2z9SlP7VUPaUa2c/nZDSTeZO/gJmuyeJ9Iu8XtE1KvGRpHt6zVpKl1uyEoc+M5SO7YG+r8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIIWlZK7FF2zVpeujHX1SXvuy5F4vd69JtXI65jfCGUb#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG3QjvzM+uHT65E6nwIhM59XNE6tJ4oKmErztLJ1wZJkltdzzAyZYA6BiT1RzCPoMNPk9MeYIRcQ8NtPcaWiPtU=#012 create=True mode=0644 path=/tmp/ansible.xo5an9og state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:21:57 compute-0 podman[252396]: 2025-12-05 01:21:57.174370331 +0000 UTC m=+0.130283793 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:21:57 compute-0 podman[252397]: 2025-12-05 01:21:57.180653718 +0000 UTC m=+0.134960645 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:21:57 compute-0 podman[252398]: 2025-12-05 01:21:57.216470098 +0000 UTC m=+0.164280682 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  5 01:21:57 compute-0 python3.9[252479]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.xo5an9og' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:21:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:21:58 compute-0 python3.9[252639]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.xo5an9og state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:21:59 compute-0 systemd-logind[792]: Session 48 logged out. Waiting for processes to exit.
Dec  5 01:21:59 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec  5 01:21:59 compute-0 systemd[1]: session-48.scope: Consumed 9.117s CPU time.
Dec  5 01:21:59 compute-0 systemd-logind[792]: Removed session 48.
Dec  5 01:21:59 compute-0 podman[252664]: 2025-12-05 01:21:59.333373941 +0000 UTC m=+0.164585480 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:21:59 compute-0 podman[158197]: time="2025-12-05T01:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6833 "" "Go-http-client/1.1"
Dec  5 01:22:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: ERROR   01:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:22:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:22:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:03 compute-0 podman[252683]: 2025-12-05 01:22:03.748523689 +0000 UTC m=+0.151524272 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, container_name=kepler, managed_by=edpm_ansible, name=ubi9)
Dec  5 01:22:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:04 compute-0 systemd-logind[792]: New session 49 of user zuul.
Dec  5 01:22:04 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec  5 01:22:05 compute-0 python3.9[252855]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:22:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:06 compute-0 podman[252907]: 2025-12-05 01:22:06.737558045 +0000 UTC m=+0.142482577 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  5 01:22:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.176682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727176744, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1669, "num_deletes": 251, "total_data_size": 2324004, "memory_usage": 2366936, "flush_reason": "Manual Compaction"}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727193200, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1368293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7297, "largest_seqno": 8965, "table_properties": {"data_size": 1362814, "index_size": 2426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 16506, "raw_average_key_size": 20, "raw_value_size": 1349632, "raw_average_value_size": 1714, "num_data_blocks": 114, "num_entries": 787, "num_filter_entries": 787, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897576, "oldest_key_time": 1764897576, "file_creation_time": 1764897727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 16612 microseconds, and 9526 cpu microseconds.
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.193295) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1368293 bytes OK
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.193322) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195869) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195940) EVENT_LOG_v1 {"time_micros": 1764897727195931, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.195965) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2316510, prev total WAL file size 2316510, number of live WAL files 2.
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.197472) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1336KB)], [20(6969KB)]
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727197565, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8505379, "oldest_snapshot_seqno": -1}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3362 keys, 6764616 bytes, temperature: kUnknown
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727268499, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6764616, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6738999, "index_size": 16100, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80442, "raw_average_key_size": 23, "raw_value_size": 6675084, "raw_average_value_size": 1985, "num_data_blocks": 714, "num_entries": 3362, "num_filter_entries": 3362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764897727, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.268736) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6764616 bytes
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.271295) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 95.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 6.8 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(11.2) write-amplify(4.9) OK, records in: 3806, records dropped: 444 output_compression: NoCompression
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.271319) EVENT_LOG_v1 {"time_micros": 1764897727271307, "job": 6, "event": "compaction_finished", "compaction_time_micros": 71002, "compaction_time_cpu_micros": 35154, "output_level": 6, "num_output_files": 1, "total_output_size": 6764616, "num_input_records": 3806, "num_output_records": 3362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727271718, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897727273152, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.197215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:22:07.273497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:22:07 compute-0 python3.9[253032]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  5 01:22:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:09 compute-0 python3.9[253186]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:22:10 compute-0 podman[253295]: 2025-12-05 01:22:10.061336589 +0000 UTC m=+0.135775248 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:22:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:10 compute-0 python3.9[253465]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47d0fcca-d11a-4321-a1b7-e154a77eeb34 does not exist
Dec  5 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 28de8e22-77a8-45c5-bed4-161e65fd60f2 does not exist
Dec  5 01:22:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 668affb1-948d-4c65-a7c4-40c5b48f8125 does not exist
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:22:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:22:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:22:11 compute-0 python3.9[253748]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:22:11 compute-0 podman[253810]: 2025-12-05 01:22:11.992236629 +0000 UTC m=+0.071615790 container create dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:11.961310057 +0000 UTC m=+0.040689238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:12 compute-0 systemd[1]: Started libpod-conmon-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope.
Dec  5 01:22:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.120222197 +0000 UTC m=+0.199601408 container init dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.131853295 +0000 UTC m=+0.211232436 container start dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.137569086 +0000 UTC m=+0.216948317 container attach dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:22:12 compute-0 quirky_liskov[253872]: 167 167
Dec  5 01:22:12 compute-0 systemd[1]: libpod-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope: Deactivated successfully.
Dec  5 01:22:12 compute-0 conmon[253872]: conmon dd60e40133abcdeccfc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope/container/memory.events
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.143425511 +0000 UTC m=+0.222804652 container died dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:22:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d1a295b33b11b86339b2ff2f7f456a577562b16dbbe1ca2fbc683ec65f6a593-merged.mount: Deactivated successfully.
Dec  5 01:22:12 compute-0 podman[253810]: 2025-12-05 01:22:12.198525404 +0000 UTC m=+0.277904545 container remove dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:22:12 compute-0 systemd[1]: libpod-conmon-dd60e40133abcdeccfc104a417199540b556360f796693fb359749e92271844b.scope: Deactivated successfully.
Dec  5 01:22:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.450126867 +0000 UTC m=+0.085707347 container create 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.406880207 +0000 UTC m=+0.042460697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:12 compute-0 systemd[1]: Started libpod-conmon-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope.
Dec  5 01:22:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.605227789 +0000 UTC m=+0.240808269 container init 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.630721037 +0000 UTC m=+0.266301507 container start 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:12 compute-0 podman[253914]: 2025-12-05 01:22:12.636693936 +0000 UTC m=+0.272274416 container attach 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:12 compute-0 python3.9[253997]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:13 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec  5 01:22:13 compute-0 systemd[1]: session-49.scope: Consumed 6.673s CPU time.
Dec  5 01:22:13 compute-0 systemd-logind[792]: Session 49 logged out. Waiting for processes to exit.
Dec  5 01:22:13 compute-0 systemd-logind[792]: Removed session 49.
Dec  5 01:22:13 compute-0 jolly_nash[253963]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:22:13 compute-0 jolly_nash[253963]: --> relative data size: 1.0
Dec  5 01:22:13 compute-0 jolly_nash[253963]: --> All data devices are unavailable
Dec  5 01:22:13 compute-0 systemd[1]: libpod-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Deactivated successfully.
Dec  5 01:22:13 compute-0 systemd[1]: libpod-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Consumed 1.210s CPU time.
Dec  5 01:22:13 compute-0 podman[253914]: 2025-12-05 01:22:13.885680674 +0000 UTC m=+1.521261204 container died 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 01:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d979e2a9e2d1658dace9c8c4008334a512fc4ff498083e50a6d029a9373b17ed-merged.mount: Deactivated successfully.
Dec  5 01:22:13 compute-0 podman[253914]: 2025-12-05 01:22:13.977022999 +0000 UTC m=+1.612603489 container remove 1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:22:13 compute-0 systemd[1]: libpod-conmon-1f876abc7569049eb4698c47d122e61867b20a4bc8857dd2cd3c09952bb4ea73.scope: Deactivated successfully.
Dec  5 01:22:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.035455395 +0000 UTC m=+0.082738263 container create 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.004724149 +0000 UTC m=+0.052007017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:15 compute-0 systemd[1]: Started libpod-conmon-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope.
Dec  5 01:22:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.199469699 +0000 UTC m=+0.246752597 container init 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.218978169 +0000 UTC m=+0.266261027 container start 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.226073799 +0000 UTC m=+0.273356657 container attach 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 01:22:15 compute-0 cranky_kepler[254212]: 167 167
Dec  5 01:22:15 compute-0 systemd[1]: libpod-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope: Deactivated successfully.
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.23216717 +0000 UTC m=+0.279450028 container died 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d8ad037f440980b1ce19a8ec44826361b6b8855acee7c1abd138348b90b5b13-merged.mount: Deactivated successfully.
Dec  5 01:22:15 compute-0 podman[254196]: 2025-12-05 01:22:15.316171918 +0000 UTC m=+0.363454786 container remove 9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:15 compute-0 systemd[1]: libpod-conmon-9185669a55200ca3524a40e13be7e3edb0cf5a9fc540a91bc5f77dcde3dc15a7.scope: Deactivated successfully.
Dec  5 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.608251991 +0000 UTC m=+0.105682790 container create e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.572407281 +0000 UTC m=+0.069838160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:15 compute-0 systemd[1]: Started libpod-conmon-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope.
Dec  5 01:22:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.780758504 +0000 UTC m=+0.278189333 container init e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.798207886 +0000 UTC m=+0.295638655 container start e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:22:15 compute-0 podman[254235]: 2025-12-05 01:22:15.802777175 +0000 UTC m=+0.300208034 container attach e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:22:16
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'backups']
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:22:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]: {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    "0": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "devices": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "/dev/loop3"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            ],
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_name": "ceph_lv0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_size": "21470642176",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "name": "ceph_lv0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "tags": {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_name": "ceph",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.crush_device_class": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.encrypted": "0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_id": "0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.vdo": "0"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            },
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "vg_name": "ceph_vg0"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        }
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    ],
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    "1": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "devices": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "/dev/loop4"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            ],
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_name": "ceph_lv1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_size": "21470642176",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "name": "ceph_lv1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "tags": {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_name": "ceph",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.crush_device_class": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.encrypted": "0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_id": "1",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.vdo": "0"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            },
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "vg_name": "ceph_vg1"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        }
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    ],
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    "2": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "devices": [
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "/dev/loop5"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            ],
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_name": "ceph_lv2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_size": "21470642176",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "name": "ceph_lv2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "tags": {
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.cluster_name": "ceph",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.crush_device_class": "",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.encrypted": "0",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osd_id": "2",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:                "ceph.vdo": "0"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            },
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "type": "block",
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:            "vg_name": "ceph_vg2"
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:        }
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]:    ]
Dec  5 01:22:16 compute-0 peaceful_zhukovsky[254251]: }
Dec  5 01:22:16 compute-0 systemd[1]: libpod-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope: Deactivated successfully.
Dec  5 01:22:16 compute-0 conmon[254251]: conmon e326a4b2487385ee7167 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope/container/memory.events
Dec  5 01:22:16 compute-0 podman[254235]: 2025-12-05 01:22:16.62384724 +0000 UTC m=+1.121278049 container died e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e6b01fd1c992d2337fd5ec62f6b20d0875ead1e39b0fbc291bbd6e309f53e3-merged.mount: Deactivated successfully.
Dec  5 01:22:16 compute-0 podman[254235]: 2025-12-05 01:22:16.713819396 +0000 UTC m=+1.211250205 container remove e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:22:16 compute-0 systemd[1]: libpod-conmon-e326a4b2487385ee7167c6186a5880b95adf239279c7c204ec0abbbb51dba515.scope: Deactivated successfully.
Dec  5 01:22:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.839827287 +0000 UTC m=+0.079969575 container create 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.806797386 +0000 UTC m=+0.046939674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:17 compute-0 systemd[1]: Started libpod-conmon-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope.
Dec  5 01:22:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:17 compute-0 podman[254410]: 2025-12-05 01:22:17.986574724 +0000 UTC m=+0.226717072 container init 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.004145119 +0000 UTC m=+0.244287397 container start 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.011024013 +0000 UTC m=+0.251166361 container attach 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:22:18 compute-0 competent_montalcini[254427]: 167 167
Dec  5 01:22:18 compute-0 systemd[1]: libpod-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope: Deactivated successfully.
Dec  5 01:22:18 compute-0 conmon[254427]: conmon 898a8e595deaeb7520a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope/container/memory.events
Dec  5 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.016489247 +0000 UTC m=+0.256631535 container died 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a344dce82905bbfc6e2934a2f0fd6eaf989577f887695f057ddc79ccf3a0a9d3-merged.mount: Deactivated successfully.
Dec  5 01:22:18 compute-0 podman[254410]: 2025-12-05 01:22:18.093607581 +0000 UTC m=+0.333749879 container remove 898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:22:18 compute-0 systemd[1]: libpod-conmon-898a8e595deaeb7520a8bd5611ceaafe10ae304fd47f7316f79e39894a544b94.scope: Deactivated successfully.
Dec  5 01:22:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.356610655 +0000 UTC m=+0.091648144 container create e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.323949455 +0000 UTC m=+0.058987004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:22:18 compute-0 systemd[1]: Started libpod-conmon-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope.
Dec  5 01:22:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.60260702 +0000 UTC m=+0.337644559 container init e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.641151676 +0000 UTC m=+0.376189155 container start e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:22:18 compute-0 podman[254450]: 2025-12-05 01:22:18.64907386 +0000 UTC m=+0.384111369 container attach e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:22:18 compute-0 systemd-logind[792]: New session 50 of user zuul.
Dec  5 01:22:18 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]: {
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_id": 0,
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "type": "bluestore"
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    },
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_id": 1,
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "type": "bluestore"
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    },
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_id": 2,
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:        "type": "bluestore"
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]:    }
Dec  5 01:22:19 compute-0 upbeat_matsumoto[254466]: }
Dec  5 01:22:19 compute-0 systemd[1]: libpod-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Deactivated successfully.
Dec  5 01:22:19 compute-0 podman[254450]: 2025-12-05 01:22:19.860981081 +0000 UTC m=+1.596018590 container died e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:22:19 compute-0 systemd[1]: libpod-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Consumed 1.219s CPU time.
Dec  5 01:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d40c0f18fed29409e68750afa5a7be05e9b7662ab88c1fe7698719e39234d416-merged.mount: Deactivated successfully.
Dec  5 01:22:19 compute-0 podman[254450]: 2025-12-05 01:22:19.986456178 +0000 UTC m=+1.721493637 container remove e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_matsumoto, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:22:20 compute-0 systemd[1]: libpod-conmon-e706692ebac34d4c4bca8f1bd4190b672cc9e386afe5fc1d206496f08f2c5d32.scope: Deactivated successfully.
Dec  5 01:22:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:22:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:22:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69757484-275e-43bc-9abb-04d954c800f9 does not exist
Dec  5 01:22:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1571835d-f16c-4ffd-937b-4e85d124500c does not exist
Dec  5 01:22:20 compute-0 python3.9[254665]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:22:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:22:21 compute-0 python3.9[254871]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:22:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:23 compute-0 python3.9[254955]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  5 01:22:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:25 compute-0 python3.9[255106]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:22:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2026 writes, 8997 keys, 2026 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2026 writes, 8997 keys, 2026 commit groups, 1.0 writes per commit group, ingest: 10.88 MB, 0.02 MB/s#012Interval WAL: 2026 writes, 2026 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    131.8      0.06              0.03         3    0.021       0      0       0.0       0.0#012  L6      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    110.8     98.2      0.14              0.07         2    0.068    7115    734       0.0       0.0#012 Sum      1/0    6.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     76.0    108.7      0.20              0.10         5    0.039    7115    734       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     77.9    111.3      0.19              0.10         4    0.048    7115    734       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    110.8     98.2      0.14              0.07         2    0.068    7115    734       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    142.4      0.06              0.03         2    0.028       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.01 GB read, 0.02 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 573.61 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000101 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(35,486.89 KB,0.154376%) FilterBlock(6,27.55 KB,0.00873417%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 01:22:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:27 compute-0 python3.9[255257]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:22:27 compute-0 podman[255314]: 2025-12-05 01:22:27.704076361 +0000 UTC m=+0.115663892 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:22:27 compute-0 podman[255310]: 2025-12-05 01:22:27.713676051 +0000 UTC m=+0.121392203 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:22:27 compute-0 podman[255316]: 2025-12-05 01:22:27.766427418 +0000 UTC m=+0.167754960 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:22:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:28 compute-0 python3.9[255472]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:22:29 compute-0 python3.9[255622]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:22:29 compute-0 podman[255649]: 2025-12-05 01:22:29.724019882 +0000 UTC m=+0.139059911 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:22:29 compute-0 podman[158197]: time="2025-12-05T01:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6833 "" "Go-http-client/1.1"
Dec  5 01:22:30 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec  5 01:22:30 compute-0 systemd[1]: session-50.scope: Consumed 8.967s CPU time.
Dec  5 01:22:30 compute-0 systemd-logind[792]: Session 50 logged out. Waiting for processes to exit.
Dec  5 01:22:30 compute-0 systemd-logind[792]: Removed session 50.
Dec  5 01:22:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: ERROR   01:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:22:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:22:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:34 compute-0 podman[255668]: 2025-12-05 01:22:34.680424013 +0000 UTC m=+0.096766538 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.expose-services=, release-0.7.12=, name=ubi9, release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:22:36 compute-0 systemd-logind[792]: New session 51 of user zuul.
Dec  5 01:22:36 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec  5 01:22:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:37 compute-0 podman[255814]: 2025-12-05 01:22:37.644819037 +0000 UTC m=+0.170259018 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter)
Dec  5 01:22:37 compute-0 python3.9[255853]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:22:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:40 compute-0 podman[255991]: 2025-12-05 01:22:40.344182615 +0000 UTC m=+0.120471736 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:22:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:40 compute-0 python3.9[256040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:41 compute-0 python3.9[256192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.544 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.545 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:22:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:22:42 compute-0 python3.9[256345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:43 compute-0 python3.9[256423]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:44 compute-0 python3.9[256575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:45 compute-0 python3.9[256653]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/ovn/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:22:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:46 compute-0 python3.9[256805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:47 compute-0 python3.9[256883]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/ovn/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/ovn/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:48 compute-0 python3.9[257035]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:49 compute-0 python3.9[257187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:50 compute-0 python3.9[257340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:51 compute-0 python3.9[257418]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:52 compute-0 python3.9[257570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:52 compute-0 python3.9[257648]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:53 compute-0 python3.9[257800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:54 compute-0 python3.9[257878]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:55 compute-0 python3.9[258030]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:56 compute-0 python3.9[258182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:22:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:22:57 compute-0 python3.9[258334]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:22:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:22:58 compute-0 podman[258406]: 2025-12-05 01:22:58.708826467 +0000 UTC m=+0.106019504 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:22:58 compute-0 podman[258405]: 2025-12-05 01:22:58.720139511 +0000 UTC m=+0.123942072 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 01:22:58 compute-0 podman[258407]: 2025-12-05 01:22:58.778307106 +0000 UTC m=+0.166007230 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:22:58 compute-0 python3.9[258522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897777.1337445-165-142445557368831/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=52f8f3f3b93de18584ffd2f1bf0edd763c8f4107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:22:59 compute-0 podman[158197]: time="2025-12-05T01:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6827 "" "Go-http-client/1.1"
Dec  5 01:22:59 compute-0 podman[258674]: 2025-12-05 01:22:59.971743456 +0000 UTC m=+0.128478817 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 01:23:00 compute-0 python3.9[258675]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:01 compute-0 python3.9[258817]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897779.2800305-165-278497685336608/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=464076ef88dcc89aa3cbba91e13b4b726d71f651 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: ERROR   01:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:23:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:23:02 compute-0 python3.9[258969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:02 compute-0 python3.9[259092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897781.2572145-165-53604373244575/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f1718879b0aeb9c0646235a3fdcd720acf7caa59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:04 compute-0 python3.9[259244]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:04 compute-0 podman[259368]: 2025-12-05 01:23:04.875677083 +0000 UTC m=+0.119119078 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-type=git, container_name=kepler, version=9.4, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec  5 01:23:05 compute-0 python3.9[259415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:06 compute-0 python3.9[259569]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:06 compute-0 python3.9[259647]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:07 compute-0 python3.9[259799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:08 compute-0 podman[259849]: 2025-12-05 01:23:08.347309668 +0000 UTC m=+0.135712879 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:23:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.442468) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788442581, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 721, "num_deletes": 251, "total_data_size": 926766, "memory_usage": 939560, "flush_reason": "Manual Compaction"}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788454868, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 918662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8966, "largest_seqno": 9686, "table_properties": {"data_size": 914936, "index_size": 1570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8048, "raw_average_key_size": 18, "raw_value_size": 907452, "raw_average_value_size": 2090, "num_data_blocks": 73, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897727, "oldest_key_time": 1764897727, "file_creation_time": 1764897788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12542 microseconds, and 6372 cpu microseconds.
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.455017) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 918662 bytes OK
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.455042) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457309) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457331) EVENT_LOG_v1 {"time_micros": 1764897788457324, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.457355) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 923052, prev total WAL file size 923052, number of live WAL files 2.
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.458617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(897KB)], [23(6606KB)]
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788458716, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7683278, "oldest_snapshot_seqno": -1}
Dec  5 01:23:08 compute-0 python3.9[259898]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3282 keys, 6082342 bytes, temperature: kUnknown
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788522735, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6082342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6058392, "index_size": 14625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 79565, "raw_average_key_size": 24, "raw_value_size": 5996976, "raw_average_value_size": 1827, "num_data_blocks": 639, "num_entries": 3282, "num_filter_entries": 3282, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764897788, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.523604) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6082342 bytes
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.526690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 94.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.5 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(15.0) write-amplify(6.6) OK, records in: 3796, records dropped: 514 output_compression: NoCompression
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.526734) EVENT_LOG_v1 {"time_micros": 1764897788526714, "job": 8, "event": "compaction_finished", "compaction_time_micros": 64109, "compaction_time_cpu_micros": 31640, "output_level": 6, "num_output_files": 1, "total_output_size": 6082342, "num_input_records": 3796, "num_output_records": 3282, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788528047, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764897788530986, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.458271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:08 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:23:08.531245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:23:09 compute-0 python3.9[260050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:10 compute-0 python3.9[260128]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/libvirt/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/libvirt/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:10 compute-0 podman[260153]: 2025-12-05 01:23:10.719564752 +0000 UTC m=+0.128768145 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:23:11 compute-0 python3.9[260304]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:12 compute-0 python3.9[260456]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:13 compute-0 python3.9[260608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:14 compute-0 python3.9[260686]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt _original_basename=compute-0.ctlplane.example.com-tls.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:15 compute-0 python3.9[260838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:23:16
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', 'default.rgw.control']
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:23:16 compute-0 python3.9[260916]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt _original_basename=compute-0.ctlplane.example.com-ca.crt recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:23:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:17 compute-0 python3.9[261068]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:18 compute-0 python3.9[261146]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key _original_basename=compute-0.ctlplane.example.com-tls.key recurse=False state=file path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:19 compute-0 python3.9[261299]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:20 compute-0 python3.9[261502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0393788-1c88-474b-836f-a898dfc8ed71 does not exist
Dec  5 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6e3c338-3276-4072-bdbb-f728d98eee8e does not exist
Dec  5 01:23:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 66534c34-b662-49f4-8e91-91ee001b37b8 does not exist
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:23:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:23:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:23:21 compute-0 python3.9[261658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.649753571 +0000 UTC m=+0.082813270 container create d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.620066567 +0000 UTC m=+0.053126346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:22 compute-0 systemd[1]: Started libpod-conmon-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope.
Dec  5 01:23:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:22 compute-0 python3.9[261955]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.804664991 +0000 UTC m=+0.237724760 container init d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.819145014 +0000 UTC m=+0.252204693 container start d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:23:22 compute-0 podman[261944]: 2025-12-05 01:23:22.824261076 +0000 UTC m=+0.257320755 container attach d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:23:22 compute-0 quizzical_driscoll[261962]: 167 167
Dec  5 01:23:22 compute-0 systemd[1]: libpod-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope: Deactivated successfully.
Dec  5 01:23:22 compute-0 conmon[261962]: conmon d9a8e2834d2d85e6617f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope/container/memory.events
Dec  5 01:23:22 compute-0 podman[261967]: 2025-12-05 01:23:22.911697793 +0000 UTC m=+0.057447296 container died d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-081c40a12d670955e9fe6f2fb8220ea55e29aed5b0186d7d30d2c2c63bfcfb9a-merged.mount: Deactivated successfully.
Dec  5 01:23:22 compute-0 podman[261967]: 2025-12-05 01:23:22.987800156 +0000 UTC m=+0.133549599 container remove d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:23:22 compute-0 systemd[1]: libpod-conmon-d9a8e2834d2d85e6617f028ff922ab107ebfacb66e0fc658e13e62785ca2105e.scope: Deactivated successfully.
Dec  5 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.254784397 +0000 UTC m=+0.082027248 container create aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.217513762 +0000 UTC m=+0.044756654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:23 compute-0 systemd[1]: Started libpod-conmon-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope.
Dec  5 01:23:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.422781701 +0000 UTC m=+0.250024532 container init aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Dec  5 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.450108399 +0000 UTC m=+0.277351200 container start aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:23:23 compute-0 podman[262056]: 2025-12-05 01:23:23.454608393 +0000 UTC m=+0.281851204 container attach aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:23:23 compute-0 python3.9[262160]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:24 compute-0 python3.9[262249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:24 compute-0 pensive_liskov[262103]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:23:24 compute-0 pensive_liskov[262103]: --> relative data size: 1.0
Dec  5 01:23:24 compute-0 pensive_liskov[262103]: --> All data devices are unavailable
Dec  5 01:23:24 compute-0 systemd[1]: libpod-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Deactivated successfully.
Dec  5 01:23:24 compute-0 systemd[1]: libpod-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Consumed 1.243s CPU time.
Dec  5 01:23:24 compute-0 podman[262056]: 2025-12-05 01:23:24.782843656 +0000 UTC m=+1.610086507 container died aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Dec  5 01:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-27dd6b3d6a05ebdbc9e91b9e113acc2f4c3fd5fba946d622d8aab17c40380324-merged.mount: Deactivated successfully.
Dec  5 01:23:24 compute-0 podman[262056]: 2025-12-05 01:23:24.876094235 +0000 UTC m=+1.703337086 container remove aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_liskov, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:23:24 compute-0 systemd[1]: libpod-conmon-aaa77535f9a35c6452e5177475cf65a0f39cc44c5631d5933cfb0d6ad16c072f.scope: Deactivated successfully.
Dec  5 01:23:25 compute-0 python3.9[262528]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.056632668 +0000 UTC m=+0.090065252 container create 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.022372717 +0000 UTC m=+0.055805341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:26 compute-0 systemd[1]: Started libpod-conmon-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope.
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:23:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.195433561 +0000 UTC m=+0.228866185 container init 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.213347688 +0000 UTC m=+0.246780262 container start 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.221183546 +0000 UTC m=+0.254616130 container attach 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:23:26 compute-0 interesting_lalande[262650]: 167 167
Dec  5 01:23:26 compute-0 systemd[1]: libpod-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope: Deactivated successfully.
Dec  5 01:23:26 compute-0 conmon[262650]: conmon 0cf8b23ca31867352974 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope/container/memory.events
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.228342825 +0000 UTC m=+0.261775409 container died 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5f42265189c78743d83baaa4192545635f7779fa1cb87d9fed17d31ff3996a-merged.mount: Deactivated successfully.
Dec  5 01:23:26 compute-0 podman[262595]: 2025-12-05 01:23:26.309512688 +0000 UTC m=+0.342945262 container remove 0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_lalande, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:23:26 compute-0 systemd[1]: libpod-conmon-0cf8b23ca318673529745fd3262d7e817089858ea371607800bca698006b084e.scope: Deactivated successfully.
Dec  5 01:23:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.597483372 +0000 UTC m=+0.077891803 container create 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.573415304 +0000 UTC m=+0.053823775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:26 compute-0 systemd[1]: Started libpod-conmon-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope.
Dec  5 01:23:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.761063363 +0000 UTC m=+0.241471824 container init 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:23:26 compute-0 python3.9[262768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.793158124 +0000 UTC m=+0.273566585 container start 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 01:23:26 compute-0 podman[262750]: 2025-12-05 01:23:26.799424738 +0000 UTC m=+0.279833229 container attach 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:23:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:27 compute-0 gallant_chaum[262773]: {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    "0": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "devices": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "/dev/loop3"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            ],
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_name": "ceph_lv0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_size": "21470642176",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "name": "ceph_lv0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "tags": {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_name": "ceph",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.crush_device_class": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.encrypted": "0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_id": "0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.vdo": "0"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            },
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "vg_name": "ceph_vg0"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        }
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    ],
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    "1": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "devices": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "/dev/loop4"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            ],
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_name": "ceph_lv1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_size": "21470642176",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "name": "ceph_lv1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "tags": {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_name": "ceph",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.crush_device_class": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.encrypted": "0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_id": "1",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.vdo": "0"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            },
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "vg_name": "ceph_vg1"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        }
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    ],
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    "2": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "devices": [
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "/dev/loop5"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            ],
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_name": "ceph_lv2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_size": "21470642176",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "name": "ceph_lv2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "tags": {
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.cluster_name": "ceph",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.crush_device_class": "",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.encrypted": "0",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osd_id": "2",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:                "ceph.vdo": "0"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            },
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "type": "block",
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:            "vg_name": "ceph_vg2"
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:        }
Dec  5 01:23:27 compute-0 gallant_chaum[262773]:    ]
Dec  5 01:23:27 compute-0 gallant_chaum[262773]: }
Dec  5 01:23:27 compute-0 python3.9[262900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897806.030414-375-111248857391653/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:27 compute-0 systemd[1]: libpod-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope: Deactivated successfully.
Dec  5 01:23:27 compute-0 podman[262750]: 2025-12-05 01:23:27.642372678 +0000 UTC m=+1.122781139 container died 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d87f8fe926990233977213a879a96dd90de19d030f8c376dde24a7d9164a6eb-merged.mount: Deactivated successfully.
Dec  5 01:23:27 compute-0 podman[262750]: 2025-12-05 01:23:27.757761032 +0000 UTC m=+1.238169463 container remove 9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:23:27 compute-0 systemd[1]: libpod-conmon-9469e7dcc437e8dcc610a8018fcde3b0c34725d41f7695fab2cdcc99640057b7.scope: Deactivated successfully.
Dec  5 01:23:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:28 compute-0 python3.9[263177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:28 compute-0 podman[263207]: 2025-12-05 01:23:28.970647922 +0000 UTC m=+0.084515607 container create ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:23:29 compute-0 systemd[1]: Started libpod-conmon-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope.
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:28.937544633 +0000 UTC m=+0.051412378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.097784872 +0000 UTC m=+0.211652577 container init ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.112208892 +0000 UTC m=+0.226076557 container start ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:23:29 compute-0 infallible_aryabhata[263258]: 167 167
Dec  5 01:23:29 compute-0 systemd[1]: libpod-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope: Deactivated successfully.
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.119510545 +0000 UTC m=+0.233378250 container attach ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.119860655 +0000 UTC m=+0.233728330 container died ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:23:29 compute-0 podman[263245]: 2025-12-05 01:23:29.130622473 +0000 UTC m=+0.108446071 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 01:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-5388b753c3d0ebddf0dfb472a1d55fec451bfd3207f17e41d74855f0a3ef8303-merged.mount: Deactivated successfully.
Dec  5 01:23:29 compute-0 podman[263252]: 2025-12-05 01:23:29.168743682 +0000 UTC m=+0.125763303 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  5 01:23:29 compute-0 podman[263207]: 2025-12-05 01:23:29.176135497 +0000 UTC m=+0.290003152 container remove ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_aryabhata, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:23:29 compute-0 podman[263248]: 2025-12-05 01:23:29.181774353 +0000 UTC m=+0.141538200 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:23:29 compute-0 systemd[1]: libpod-conmon-ae95d83de8985874c37d2d7e1e33636aecafc70342fa979d3d53047fa610a431.scope: Deactivated successfully.
Dec  5 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.365216226 +0000 UTC m=+0.061775166 container create a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:23:29 compute-0 systemd[1]: Started libpod-conmon-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope.
Dec  5 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.347061842 +0000 UTC m=+0.043620792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:23:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.484495507 +0000 UTC m=+0.181054457 container init a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.496060018 +0000 UTC m=+0.192618958 container start a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:23:29 compute-0 podman[263408]: 2025-12-05 01:23:29.501322594 +0000 UTC m=+0.197881534 container attach a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:23:29 compute-0 podman[158197]: time="2025-12-05T01:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 34393 "" "Go-http-client/1.1"
Dec  5 01:23:29 compute-0 python3.9[263483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7244 "" "Go-http-client/1.1"
Dec  5 01:23:30 compute-0 podman[263534]: 2025-12-05 01:23:30.315406034 +0000 UTC m=+0.125056853 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:23:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:30 compute-0 python3.9[263585]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]: {
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_id": 0,
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "type": "bluestore"
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    },
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_id": 1,
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "type": "bluestore"
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    },
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_id": 2,
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:        "type": "bluestore"
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]:    }
Dec  5 01:23:30 compute-0 unruffled_bardeen[263450]: }
Dec  5 01:23:30 compute-0 systemd[1]: libpod-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Deactivated successfully.
Dec  5 01:23:30 compute-0 systemd[1]: libpod-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Consumed 1.075s CPU time.
Dec  5 01:23:30 compute-0 podman[263408]: 2025-12-05 01:23:30.591060136 +0000 UTC m=+1.287619086 container died a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1706f5290762916aea071dc474495e13c50783a0dfbf396f9280b4a7e9a54a04-merged.mount: Deactivated successfully.
Dec  5 01:23:30 compute-0 podman[263408]: 2025-12-05 01:23:30.67370693 +0000 UTC m=+1.370265870 container remove a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bardeen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:23:30 compute-0 systemd[1]: libpod-conmon-a93000f623a68c53de01aea12d0fa32f16038f16acf2afb1afd826acb8af7172.scope: Deactivated successfully.
Dec  5 01:23:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:23:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:23:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c3fc684-eb87-466e-a1d3-32d9715143a1 does not exist
Dec  5 01:23:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 25616021-5693-4488-ba3a-567f689a08d8 does not exist
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: ERROR   01:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:23:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:23:31 compute-0 python3.9[263822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:23:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:32 compute-0 python3.9[263974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:33 compute-0 python3.9[264052]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:34 compute-0 python3.9[264204]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:35 compute-0 podman[264328]: 2025-12-05 01:23:35.142679082 +0000 UTC m=+0.108598966 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:23:35 compute-0 python3.9[264376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:36 compute-0 python3.9[264499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764897814.5606225-441-63491421155376/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=aad3215deeeb1eba7754fd1a27527afcf2bb5051 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:37 compute-0 python3.9[264651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:38 compute-0 podman[264803]: 2025-12-05 01:23:38.565135041 +0000 UTC m=+0.134909306 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7)
Dec  5 01:23:38 compute-0 python3.9[264804]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:39 compute-0 python3.9[264903]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:40 compute-0 python3.9[265055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:23:41 compute-0 podman[265179]: 2025-12-05 01:23:41.59025981 +0000 UTC m=+0.109721537 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:23:41 compute-0 python3.9[265224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:42 compute-0 python3.9[265307]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:42 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec  5 01:23:42 compute-0 systemd[1]: session-51.scope: Consumed 58.018s CPU time.
Dec  5 01:23:42 compute-0 systemd-logind[792]: Session 51 logged out. Waiting for processes to exit.
Dec  5 01:23:42 compute-0 systemd-logind[792]: Removed session 51.
Dec  5 01:23:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:23:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:49 compute-0 systemd-logind[792]: New session 52 of user zuul.
Dec  5 01:23:49 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec  5 01:23:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:50 compute-0 python3.9[265488]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:51 compute-0 python3.9[265640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:52 compute-0 python3.9[265763]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897830.7374527-34-141551198185031/.source.conf _original_basename=ceph.conf follow=False checksum=2f9fd2109b8acc302f3e55353e83658c9c265fc5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:53 compute-0 python3.9[265915]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:23:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:54 compute-0 python3.9[266038]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897832.9908364-34-60207153475353/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=1ccf2af1c4d9cd0d8c5f12e3a57b95f6f703bc49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:23:55 compute-0 systemd[1]: session-52.scope: Deactivated successfully.
Dec  5 01:23:55 compute-0 systemd-logind[792]: Session 52 logged out. Waiting for processes to exit.
Dec  5 01:23:55 compute-0 systemd[1]: session-52.scope: Consumed 4.993s CPU time.
Dec  5 01:23:55 compute-0 systemd-logind[792]: Removed session 52.
Dec  5 01:23:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:23:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:23:59 compute-0 podman[266064]: 2025-12-05 01:23:59.717182144 +0000 UTC m=+0.116177546 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:23:59 compute-0 podman[266063]: 2025-12-05 01:23:59.730134313 +0000 UTC m=+0.139476092 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:23:59 compute-0 podman[158197]: time="2025-12-05T01:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:23:59 compute-0 podman[266065]: 2025-12-05 01:23:59.754814459 +0000 UTC m=+0.143113114 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  5 01:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6829 "" "Go-http-client/1.1"
Dec  5 01:24:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:00 compute-0 podman[266129]: 2025-12-05 01:24:00.699449142 +0000 UTC m=+0.110424486 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, 
container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  5 01:24:00 compute-0 systemd-logind[792]: New session 53 of user zuul.
Dec  5 01:24:00 compute-0 systemd[1]: Started Session 53 of User zuul.
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: ERROR   01:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:24:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:24:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:02 compute-0 python3.9[266301]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:24:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:03 compute-0 python3.9[266457]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:24:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:04 compute-0 python3.9[266609]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:24:05 compute-0 podman[266715]: 2025-12-05 01:24:05.72772015 +0000 UTC m=+0.127446209 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, container_name=kepler, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and 
utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec  5 01:24:06 compute-0 python3.9[266777]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:24:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:07 compute-0 python3.9[266929]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  5 01:24:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:09 compute-0 podman[267053]: 2025-12-05 01:24:09.329502348 +0000 UTC m=+0.164390515 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., 
name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:24:09 compute-0 python3.9[267101]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:24:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:10 compute-0 python3.9[267185]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:24:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:12 compute-0 podman[267242]: 2025-12-05 01:24:12.681279325 +0000 UTC m=+0.095851232 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:24:13 compute-0 python3.9[267360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:24:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:15 compute-0 python3[267515]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:24:16
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', 'default.rgw.control']
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:24:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:16 compute-0 python3.9[267667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:17 compute-0 python3.9[267819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:18 compute-0 python3.9[267897]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:19 compute-0 python3.9[268049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:20 compute-0 python3.9[268128]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g5v6a4tq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:21 compute-0 python3.9[268280]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:22 compute-0 python3.9[268358]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5455 writes, 783 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5455 writes, 23K keys, 5455 commit groups, 1.0 writes per commit group, ingest: 18.45 MB, 0.03 MB/s#012Interval WAL: 5455 writes, 783 syncs, 6.97 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  5 01:24:23 compute-0 python3.9[268510]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:24:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:25 compute-0 python3[268663]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  5 01:24:26 compute-0 python3.9[268815]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:24:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:26 compute-0 python3.9[268893]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:27 compute-0 python3.9[269045]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:28 compute-0 python3.9[269123]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.2 total, 600.0 interval#012Cumulative writes: 6827 writes, 28K keys, 6827 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6827 writes, 1147 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6827 writes, 28K keys, 6827 commit groups, 1.0 writes per commit group, ingest: 19.51 MB, 0.03 MB/s#012Interval WAL: 6827 writes, 1147 syncs, 5.95 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Dec  5 01:24:29 compute-0 python3.9[269275]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:29 compute-0 podman[158197]: time="2025-12-05T01:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6834 "" "Go-http-client/1.1"
Dec  5 01:24:30 compute-0 podman[269330]: 2025-12-05 01:24:30.239328797 +0000 UTC m=+0.101023676 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:24:30 compute-0 podman[269326]: 2025-12-05 01:24:30.259202078 +0000 UTC m=+0.118993684 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Dec  5 01:24:30 compute-0 podman[269333]: 2025-12-05 01:24:30.305958156 +0000 UTC m=+0.154983063 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 01:24:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:30 compute-0 python3.9[269406]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:31 compute-0 podman[269566]: 2025-12-05 01:24:31.302262455 +0000 UTC m=+0.137895349 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: ERROR   01:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:24:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:24:31 compute-0 python3.9[269637]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:24:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:24:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:32 compute-0 python3.9[269787]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fc2fce19-d4c6-4095-a713-ef78f60e1453 does not exist
Dec  5 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 525d6ea9-0132-4da9-96b4-e1c760c0d5a9 does not exist
Dec  5 01:24:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 28956989-4073-4862-9d95-29fd9d056ba8 does not exist
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:24:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:24:33 compute-0 python3.9[270057]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:24:33 compute-0 python3.9[270245]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.233305382 +0000 UTC m=+0.091229664 container create 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.194521495 +0000 UTC m=+0.052445847 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:34 compute-0 systemd[1]: Started libpod-conmon-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope.
Dec  5 01:24:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.401855561 +0000 UTC m=+0.259779853 container init 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:24:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.429395336 +0000 UTC m=+0.287319618 container start 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.437449619 +0000 UTC m=+0.295373971 container attach 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:24:34 compute-0 stupefied_babbage[270362]: 167 167
Dec  5 01:24:34 compute-0 systemd[1]: libpod-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope: Deactivated successfully.
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.443074265 +0000 UTC m=+0.300998547 container died 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:24:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5073fe0cac13245b6930e8052a7f88b56a234751d21893a01f0640f521910ad2-merged.mount: Deactivated successfully.
Dec  5 01:24:34 compute-0 podman[270309]: 2025-12-05 01:24:34.524234078 +0000 UTC m=+0.382158340 container remove 4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:24:34 compute-0 systemd[1]: libpod-conmon-4432dc223af16b58ae709b900935a91eeb8a9de4b491f89810eff4c88ec3d329.scope: Deactivated successfully.
Dec  5 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.777980883 +0000 UTC m=+0.082994395 container create 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.739370151 +0000 UTC m=+0.044383743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:34 compute-0 systemd[1]: Started libpod-conmon-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope.
Dec  5 01:24:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.923700438 +0000 UTC m=+0.228713970 container init 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.937956414 +0000 UTC m=+0.242969956 container start 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:24:34 compute-0 podman[270447]: 2025-12-05 01:24:34.944745762 +0000 UTC m=+0.249759334 container attach 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:24:35 compute-0 python3.9[270489]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5557 writes, 24K keys, 5557 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5557 writes, 841 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5557 writes, 24K keys, 5557 commit groups, 1.0 writes per commit group, ingest: 18.47 MB, 0.03 MB/s#012Interval WAL: 5557 writes, 841 syncs, 6.61 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  5 01:24:36 compute-0 elastic_lamarr[270492]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:24:36 compute-0 elastic_lamarr[270492]: --> relative data size: 1.0
Dec  5 01:24:36 compute-0 elastic_lamarr[270492]: --> All data devices are unavailable
Dec  5 01:24:36 compute-0 systemd[1]: libpod-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Deactivated successfully.
Dec  5 01:24:36 compute-0 systemd[1]: libpod-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Consumed 1.238s CPU time.
Dec  5 01:24:36 compute-0 podman[270447]: 2025-12-05 01:24:36.245596584 +0000 UTC m=+1.550610116 container died 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:24:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c57d7cd6c78a7412510578ee8fc62b5b5b415d47b59be38c939d5d40b8c5a7e1-merged.mount: Deactivated successfully.
Dec  5 01:24:36 compute-0 podman[270644]: 2025-12-05 01:24:36.289847792 +0000 UTC m=+0.133838836 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, release=1214.1726694543, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 01:24:36 compute-0 podman[270447]: 2025-12-05 01:24:36.334595375 +0000 UTC m=+1.639608897 container remove 3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:24:36 compute-0 systemd[1]: libpod-conmon-3eb896fa70a3aaaac226813c1b819b74a042e065f5470ed2f9334455475a34ab.scope: Deactivated successfully.
Dec  5 01:24:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:36 compute-0 python3.9[270703]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.332826207 +0000 UTC m=+0.087089199 container create bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.2947628 +0000 UTC m=+0.049025802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:37 compute-0 systemd[1]: Started libpod-conmon-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope.
Dec  5 01:24:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.474261023 +0000 UTC m=+0.228524055 container init bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.49467726 +0000 UTC m=+0.248940252 container start bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:24:37 compute-0 great_brown[271009]: 167 167
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.507450244 +0000 UTC m=+0.261713296 container attach bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:24:37 compute-0 systemd[1]: libpod-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope: Deactivated successfully.
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.511596059 +0000 UTC m=+0.265859041 container died bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:24:37 compute-0 python3.9[271006]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:24:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 01:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-337e26872be0656da58bb823a3d0d73cde3739f107584a4143e12b13b5a10e82-merged.mount: Deactivated successfully.
Dec  5 01:24:37 compute-0 podman[270965]: 2025-12-05 01:24:37.594530412 +0000 UTC m=+0.348793404 container remove bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_brown, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:24:37 compute-0 systemd[1]: libpod-conmon-bbf0e4033daf16aa11c72924df1cfbf87c3440f126b3d1733053089d25b6850b.scope: Deactivated successfully.
Dec  5 01:24:37 compute-0 podman[271056]: 2025-12-05 01:24:37.846090445 +0000 UTC m=+0.077537413 container create f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:24:37 compute-0 podman[271056]: 2025-12-05 01:24:37.818431107 +0000 UTC m=+0.049878105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:37 compute-0 systemd[1]: Started libpod-conmon-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope.
Dec  5 01:24:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.030234477 +0000 UTC m=+0.261681475 container init f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.063802329 +0000 UTC m=+0.295249297 container start f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.0699584 +0000 UTC m=+0.301405408 container attach f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:24:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:38 compute-0 python3.9[271204]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]: {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    "0": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "devices": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "/dev/loop3"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            ],
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_name": "ceph_lv0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_size": "21470642176",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "name": "ceph_lv0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "tags": {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_name": "ceph",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.crush_device_class": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.encrypted": "0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_id": "0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.vdo": "0"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            },
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "vg_name": "ceph_vg0"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        }
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    ],
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    "1": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "devices": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "/dev/loop4"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            ],
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_name": "ceph_lv1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_size": "21470642176",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "name": "ceph_lv1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "tags": {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_name": "ceph",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.crush_device_class": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.encrypted": "0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_id": "1",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.vdo": "0"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            },
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "vg_name": "ceph_vg1"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        }
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    ],
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    "2": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "devices": [
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "/dev/loop5"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            ],
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_name": "ceph_lv2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_size": "21470642176",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "name": "ceph_lv2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "tags": {
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.cluster_name": "ceph",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.crush_device_class": "",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.encrypted": "0",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osd_id": "2",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:                "ceph.vdo": "0"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            },
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "type": "block",
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:            "vg_name": "ceph_vg2"
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:        }
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]:    ]
Dec  5 01:24:38 compute-0 mystifying_dewdney[271108]: }
Dec  5 01:24:38 compute-0 podman[271056]: 2025-12-05 01:24:38.945187006 +0000 UTC m=+1.176633994 container died f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:24:38 compute-0 systemd[1]: libpod-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope: Deactivated successfully.
Dec  5 01:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-0da455947fda80a2a619c6e0faec77219520bd1a714c7518f9cb9ad3300b07d1-merged.mount: Deactivated successfully.
Dec  5 01:24:39 compute-0 podman[271056]: 2025-12-05 01:24:39.066935386 +0000 UTC m=+1.298382354 container remove f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:24:39 compute-0 systemd[1]: libpod-conmon-f9927334bc64bf01bd295ff1ec02ed92d47c40f7a1d09b4dd12d769c140466c8.scope: Deactivated successfully.
Dec  5 01:24:39 compute-0 podman[271388]: 2025-12-05 01:24:39.529088086 +0000 UTC m=+0.120690032 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm)
Dec  5 01:24:39 compute-0 python3.9[271467]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.22986224 +0000 UTC m=+0.087626394 container create 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.19239571 +0000 UTC m=+0.050159884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:40 compute-0 systemd[1]: Started libpod-conmon-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope.
Dec  5 01:24:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.378520157 +0000 UTC m=+0.236284381 container init 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.396752793 +0000 UTC m=+0.254516947 container start 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.403473339 +0000 UTC m=+0.261237553 container attach 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec  5 01:24:40 compute-0 keen_hellman[271570]: 167 167
Dec  5 01:24:40 compute-0 systemd[1]: libpod-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope: Deactivated successfully.
Dec  5 01:24:40 compute-0 conmon[271570]: conmon 976785ceb889088c297c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope/container/memory.events
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.408512729 +0000 UTC m=+0.266276883 container died 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:24:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-729f0b32ee8f8109cde1ab3c0e677470a7d120518369eba258594edc86dc97af-merged.mount: Deactivated successfully.
Dec  5 01:24:40 compute-0 podman[271554]: 2025-12-05 01:24:40.478257135 +0000 UTC m=+0.336021269 container remove 976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_hellman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:24:40 compute-0 systemd[1]: libpod-conmon-976785ceb889088c297c6d75126be7cdde0505354e7d8fb8986468f553289d86.scope: Deactivated successfully.
Dec  5 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.681091337 +0000 UTC m=+0.048651054 container create 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:24:40 compute-0 systemd[1]: Started libpod-conmon-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope.
Dec  5 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.664031169 +0000 UTC m=+0.031590916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:24:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.868428036 +0000 UTC m=+0.235987843 container init 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.888991222 +0000 UTC m=+0.256550979 container start 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:24:40 compute-0 podman[271651]: 2025-12-05 01:24:40.897614044 +0000 UTC m=+0.265173781 container attach 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:24:41 compute-0 python3.9[271739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]: {
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_id": 0,
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "type": "bluestore"
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    },
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_id": 1,
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "type": "bluestore"
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    },
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_id": 2,
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:        "type": "bluestore"
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]:    }
Dec  5 01:24:42 compute-0 vibrant_brattain[271690]: }
Dec  5 01:24:42 compute-0 systemd[1]: libpod-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Deactivated successfully.
Dec  5 01:24:42 compute-0 systemd[1]: libpod-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Consumed 1.256s CPU time.
Dec  5 01:24:42 compute-0 podman[271651]: 2025-12-05 01:24:42.15569833 +0000 UTC m=+1.523258087 container died 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db8ad4efadffd26f978a7bc1e810014f31564dc177240d7b57a8813260c8be1-merged.mount: Deactivated successfully.
Dec  5 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:42 compute-0 podman[271651]: 2025-12-05 01:24:42.230235527 +0000 UTC m=+1.597795244 container remove 0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_brattain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:24:42 compute-0 systemd[1]: libpod-conmon-0da0962a48694b6b9b5234c73c4db00d18ba877c89fee6a9d1ca5c95fe69f68d.scope: Deactivated successfully.
Dec  5 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:24:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:24:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c3fd36d6-2ea0-4b9b-8694-39fe2f0067ed does not exist
Dec  5 01:24:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9f10def9-e592-43cd-ab6e-8b8ddb65559c does not exist
Dec  5 01:24:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.545 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.562 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.566 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.567 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.568 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.569 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:24:42.570 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:24:42 compute-0 python3.9[271982]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:24:42 compute-0 ovs-vsctl[271983]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec  5 01:24:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:24:43 compute-0 podman[272101]: 2025-12-05 01:24:43.723847151 +0000 UTC m=+0.126627808 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:24:44 compute-0 python3.9[272159]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:24:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:45 compute-0 python3.9[272312]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:24:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:46 compute-0 python3.9[272466]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:24:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:47 compute-0 python3.9[272618]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:48 compute-0 python3.9[272696]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:24:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:49 compute-0 python3.9[272848]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:50 compute-0 python3.9[272927]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:24:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:51 compute-0 python3.9[273079]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:52 compute-0 python3.9[273231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:52 compute-0 python3.9[273309]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:54 compute-0 python3.9[273461]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:54 compute-0 python3.9[273539]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:55 compute-0 python3.9[273691]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:24:55 compute-0 systemd[1]: Reloading.
Dec  5 01:24:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:24:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:24:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:24:57 compute-0 python3.9[273880]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:24:58 compute-0 python3.9[273958]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:24:59 compute-0 python3.9[274110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:24:59 compute-0 podman[158197]: time="2025-12-05T01:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6816 "" "Go-http-client/1.1"
Dec  5 01:25:00 compute-0 python3.9[274188]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:25:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:00 compute-0 podman[274237]: 2025-12-05 01:25:00.706470356 +0000 UTC m=+0.112263536 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:25:00 compute-0 podman[274231]: 2025-12-05 01:25:00.734025938 +0000 UTC m=+0.128882702 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:25:00 compute-0 podman[274238]: 2025-12-05 01:25:00.754266975 +0000 UTC m=+0.151081484 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 01:25:01 compute-0 python3.9[274403]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:25:01 compute-0 openstack_network_exporter[160350]: ERROR   01:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:25:01 compute-0 systemd[1]: Reloading.
Dec  5 01:25:01 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:25:01 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:25:01 compute-0 podman[274405]: 2025-12-05 01:25:01.588502837 +0000 UTC m=+0.124125109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Dec  5 01:25:01 compute-0 systemd[1]: Starting Create netns directory...
Dec  5 01:25:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  5 01:25:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  5 01:25:01 compute-0 systemd[1]: Finished Create netns directory.
Dec  5 01:25:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:03 compute-0 python3.9[274614]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:04 compute-0 python3.9[274766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:04 compute-0 python3.9[274844]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ovn_controller/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ovn_controller/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:06 compute-0 python3.9[274996]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:06 compute-0 podman[275039]: 2025-12-05 01:25:06.745820001 +0000 UTC m=+0.153476091 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=edpm, name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public)
Dec  5 01:25:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:07 compute-0 python3.9[275166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:07 compute-0 python3.9[275244]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ovn_controller.json _original_basename=.zn4mekf4 recurse=False state=file path=/var/lib/kolla/config_files/ovn_controller.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:25:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:09 compute-0 python3.9[275396]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:25:10 compute-0 podman[275520]: 2025-12-05 01:25:10.021743438 +0000 UTC m=+0.125494307 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_id=edpm, managed_by=edpm_ansible)
Dec  5 01:25:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:12 compute-0 python3.9[275797]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  5 01:25:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:13 compute-0 python3.9[275949]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:25:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:14 compute-0 podman[276050]: 2025-12-05 01:25:14.727789739 +0000 UTC m=+0.134059127 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:25:14 compute-0 python3.9[276125]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:25:16
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'volumes']
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:25:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:17 compute-0 python3[276302]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:25:17 compute-0 python3[276302]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",#012          "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",#012          "RepoTags": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"#012          ],#012          "RepoDigests": [#012               "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2025-12-01T06:38:47.246477714Z",#012          "Config": {#012               "User": "root",#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",#012                    "LANG=en_US.UTF-8",#012                    "TZ=UTC",#012                    "container=oci"#012               ],#012               "Entrypoint": [#012                    "dumb-init",#012                    "--single-child",#012                    "--"#012               ],#012               "Cmd": [#012                    "kolla_start"#012               ],#012               "Labels": {#012                    "io.buildah.version": "1.41.3",#012                    "maintainer": "OpenStack Kubernetes Operator team",#012                    "org.label-schema.build-date": "20251125",#012                    "org.label-schema.license": "GPLv2",#012                    "org.label-schema.name": "CentOS Stream 9 Base Image",#012                    "org.label-schema.schema-version": "1.0",#012                    "org.label-schema.vendor": "CentOS",#012                    "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",#012                    "tcib_managed": "true"#012               },#012               "StopSignal": "SIGTERM"#012          },#012    
      "Version": "",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 345722821,#012          "VirtualSize": 345722821,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",#012                    "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",#012                    "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",#012                    "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",#012                    "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"#012               ]#012          },#012          "Labels": {#012               "io.buildah.version": "1.41.3",#012               "maintainer": "OpenStack Kubernetes Operator team",#012               "org.label-schema.build-date": "20251125",#012               "org.label-schema.license": "GPLv2",#012               "org.label-schema.name": "CentOS Stream 9 Base Image",#012               "org.label-schema.schema-version": "1.0",#012               "org.label-schema.vendor": "CentOS",#012               "tcib_build_tag": 
"fa2bb8efef6782c26ea7f1675eeb36dd",#012               "tcib_managed": "true"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "root",#012          "History": [#012               {#012                    "created": "2025-11-25T04:02:36.223494528Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:36.223562059Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-11-25T04:02:39.054452717Z",#012                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025707917Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",#012                    "comment": "FROM quay.io/centos/centos:stream9",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025744608Z",#012                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025767729Z",#012                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",#012                    "empty_layer": true#012               },#012             
  {#012                    "created": "2025-12-01T06:09:28.025791379Z",#012                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.02581523Z",#012                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.025867611Z",#012                    "created_by": "/bin/sh -c #(nop) USER root",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:09:28.469442331Z",#012                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2025-12-01T06:10:02.029095017Z",#012                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",#012                    "empty_layer": true#012               },#012               {#012                    
"created": "2025-12-01T06:10:05.672474685Z",#012                    "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-l
Dec  5 01:25:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:19 compute-0 python3.9[276511]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:25:20 compute-0 python3.9[276666]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:25:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:20 compute-0 python3.9[276742]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:25:22 compute-0 python3.9[276893]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764897921.0771534-536-24196944710177/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:25:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:22 compute-0 python3.9[276969]: ansible-systemd Invoked with state=started name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:25:24 compute-0 python3.9[277123]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:25:24 compute-0 ovs-vsctl[277124]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  5 01:25:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:25 compute-0 python3.9[277276]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:25:25 compute-0 ovs-vsctl[277278]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:25:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:26 compute-0 python3.9[277431]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:25:26 compute-0 ovs-vsctl[277432]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec  5 01:25:27 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Dec  5 01:25:27 compute-0 systemd[1]: session-53.scope: Consumed 1min 11.461s CPU time.
Dec  5 01:25:27 compute-0 systemd-logind[792]: Session 53 logged out. Waiting for processes to exit.
Dec  5 01:25:27 compute-0 systemd-logind[792]: Removed session 53.
Dec  5 01:25:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:29 compute-0 podman[158197]: time="2025-12-05T01:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
Dec  5 01:25:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: ERROR   01:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:25:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:25:31 compute-0 podman[277457]: 2025-12-05 01:25:31.714492279 +0000 UTC m=+0.121027371 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 01:25:31 compute-0 podman[277459]: 2025-12-05 01:25:31.741684711 +0000 UTC m=+0.148100640 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  5 01:25:31 compute-0 podman[277458]: 2025-12-05 01:25:31.745874429 +0000 UTC m=+0.146863096 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:25:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:32 compute-0 systemd-logind[792]: New session 54 of user zuul.
Dec  5 01:25:32 compute-0 systemd[1]: Started Session 54 of User zuul.
Dec  5 01:25:32 compute-0 podman[277525]: 2025-12-05 01:25:32.384634263 +0000 UTC m=+0.119441946 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:25:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:33 compute-0 python3.9[277696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:25:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:35 compute-0 python3.9[277852]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:36 compute-0 python3.9[278004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:37 compute-0 podman[278128]: 2025-12-05 01:25:37.209081842 +0000 UTC m=+0.155698723 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Dec  5 01:25:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:37 compute-0 python3.9[278175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:38 compute-0 python3.9[278327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:39 compute-0 python3.9[278479]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:40 compute-0 podman[278603]: 2025-12-05 01:25:40.365510141 +0000 UTC m=+0.115293401 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible)
Dec  5 01:25:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:40 compute-0 python3.9[278647]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:25:41 compute-0 python3.9[278802]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec  5 01:25:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0ea22727-3077-40b9-81d3-a451c7a7e3ed does not exist
Dec  5 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b8c9da6d-d1d5-4cec-a456-bc27013d7b34 does not exist
Dec  5 01:25:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7ca161fd-9f8b-4ee2-8474-af2a638cc50c does not exist
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:25:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:25:43 compute-0 python3.9[279082]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:25:44 compute-0 python3.9[279340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897942.9652357-86-187112415400862/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.839555973 +0000 UTC m=+0.100458766 container create 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.789958933 +0000 UTC m=+0.050861786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:44 compute-0 systemd[1]: Started libpod-conmon-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope.
Dec  5 01:25:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:44 compute-0 podman[279345]: 2025-12-05 01:25:44.985760679 +0000 UTC m=+0.246663482 container init 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:44 compute-0 podman[279359]: 2025-12-05 01:25:44.992919759 +0000 UTC m=+0.106539085 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.00399574 +0000 UTC m=+0.264898573 container start 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:25:45 compute-0 naughty_hamilton[279373]: 167 167
Dec  5 01:25:45 compute-0 systemd[1]: libpod-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope: Deactivated successfully.
Dec  5 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.011532121 +0000 UTC m=+0.272434904 container attach 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.011923112 +0000 UTC m=+0.272825875 container died 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-22f8cc186e27aae0e976b23ddf6eab9d8e00ab44677f70f5f786b08adb11a11f-merged.mount: Deactivated successfully.
Dec  5 01:25:45 compute-0 podman[279345]: 2025-12-05 01:25:45.07824067 +0000 UTC m=+0.339143433 container remove 570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hamilton, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:45 compute-0 systemd[1]: libpod-conmon-570965308b823bb816d0f659b560f4dbd0971f63ff0d8013a6d74154970a0e1e.scope: Deactivated successfully.
Dec  5 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.320310691 +0000 UTC m=+0.092548033 container create b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.282551804 +0000 UTC m=+0.054789186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:45 compute-0 systemd[1]: Started libpod-conmon-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope.
Dec  5 01:25:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.488990077 +0000 UTC m=+0.261227449 container init b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.515705625 +0000 UTC m=+0.287942967 container start b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:25:45 compute-0 podman[279476]: 2025-12-05 01:25:45.52406417 +0000 UTC m=+0.296301562 container attach b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:45 compute-0 python3.9[279575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:25:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:46 compute-0 python3.9[279710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897945.1367009-101-249378286908391/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:46 compute-0 elated_knuth[279520]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:25:46 compute-0 elated_knuth[279520]: --> relative data size: 1.0
Dec  5 01:25:46 compute-0 elated_knuth[279520]: --> All data devices are unavailable
Dec  5 01:25:46 compute-0 systemd[1]: libpod-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Deactivated successfully.
Dec  5 01:25:46 compute-0 systemd[1]: libpod-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Consumed 1.215s CPU time.
Dec  5 01:25:46 compute-0 podman[279476]: 2025-12-05 01:25:46.799704267 +0000 UTC m=+1.571941589 container died b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f439ee21e7c9d5a4bcd58aefe75191a464307883ba0305614590d7e097cc56-merged.mount: Deactivated successfully.
Dec  5 01:25:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:47 compute-0 podman[279476]: 2025-12-05 01:25:47.415444637 +0000 UTC m=+2.187681979 container remove b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_knuth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:25:47 compute-0 systemd[1]: libpod-conmon-b96c4ec1682e88bccd0eff6bd5009960b1204976c92b808ec61518bd2a9776f3.scope: Deactivated successfully.
Dec  5 01:25:48 compute-0 python3.9[279953]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.425106973 +0000 UTC m=+0.068026867 container create 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:25:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:48 compute-0 systemd[1]: Started libpod-conmon-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope.
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.398688503 +0000 UTC m=+0.041608487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.529743545 +0000 UTC m=+0.172663469 container init 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.546944847 +0000 UTC m=+0.189864751 container start 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:25:48 compute-0 modest_khorana[280044]: 167 167
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.553064208 +0000 UTC m=+0.195984142 container attach 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:25:48 compute-0 systemd[1]: libpod-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope: Deactivated successfully.
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.554337164 +0000 UTC m=+0.197257138 container died 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-08c44cbb2975d258eb98d3348861889f43ce52031c880dddf4a3fc98bbb804f6-merged.mount: Deactivated successfully.
Dec  5 01:25:48 compute-0 podman[280026]: 2025-12-05 01:25:48.619592962 +0000 UTC m=+0.262512856 container remove 57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:25:48 compute-0 systemd[1]: libpod-conmon-57ce447b39dcc3e34555176521afbdc534439b935f4c0c10ba51cf66c1d2c680.scope: Deactivated successfully.
Dec  5 01:25:48 compute-0 podman[280091]: 2025-12-05 01:25:48.859426441 +0000 UTC m=+0.078909882 container create c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:25:48 compute-0 podman[280091]: 2025-12-05 01:25:48.828990288 +0000 UTC m=+0.048473739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:48 compute-0 systemd[1]: Started libpod-conmon-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope.
Dec  5 01:25:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.013125297 +0000 UTC m=+0.232608768 container init c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.041365978 +0000 UTC m=+0.260849429 container start c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.046751839 +0000 UTC m=+0.266235360 container attach c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:49 compute-0 python3.9[280164]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:25:49 compute-0 great_bouman[280134]: {
Dec  5 01:25:49 compute-0 great_bouman[280134]:    "0": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:        {
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "devices": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "/dev/loop3"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            ],
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_name": "ceph_lv0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_size": "21470642176",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "name": "ceph_lv0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "tags": {
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_name": "ceph",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.crush_device_class": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.encrypted": "0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_id": "0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.vdo": "0"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            },
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "vg_name": "ceph_vg0"
Dec  5 01:25:49 compute-0 great_bouman[280134]:        }
Dec  5 01:25:49 compute-0 great_bouman[280134]:    ],
Dec  5 01:25:49 compute-0 great_bouman[280134]:    "1": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:        {
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "devices": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "/dev/loop4"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            ],
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_name": "ceph_lv1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_size": "21470642176",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "name": "ceph_lv1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "tags": {
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_name": "ceph",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.crush_device_class": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.encrypted": "0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_id": "1",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.vdo": "0"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            },
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "vg_name": "ceph_vg1"
Dec  5 01:25:49 compute-0 great_bouman[280134]:        }
Dec  5 01:25:49 compute-0 great_bouman[280134]:    ],
Dec  5 01:25:49 compute-0 great_bouman[280134]:    "2": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:        {
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "devices": [
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "/dev/loop5"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            ],
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_name": "ceph_lv2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_size": "21470642176",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "name": "ceph_lv2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "tags": {
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.cluster_name": "ceph",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.crush_device_class": "",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.encrypted": "0",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osd_id": "2",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:                "ceph.vdo": "0"
Dec  5 01:25:49 compute-0 great_bouman[280134]:            },
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "type": "block",
Dec  5 01:25:49 compute-0 great_bouman[280134]:            "vg_name": "ceph_vg2"
Dec  5 01:25:49 compute-0 great_bouman[280134]:        }
Dec  5 01:25:49 compute-0 great_bouman[280134]:    ]
Dec  5 01:25:49 compute-0 great_bouman[280134]: }
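(Editor's note, not part of the log: the JSON block above has the shape of `ceph-volume lvm list --format json` output — a mapping from OSD id to a list of LV records. A minimal sketch, assuming that format, which extracts each OSD's backing LV path, physical devices, and fsid; the function name and return shape are illustrative, not part of any ceph tool.)

```python
import json


def osd_block_devices(lvm_list_json: str) -> dict:
    """Map each OSD id to its block LV path, physical devices, and osd fsid.

    Assumes input shaped like `ceph-volume lvm list --format json`:
    {"<osd_id>": [{"type": "block", "devices": [...],
                   "lv_path": "...", "tags": {"ceph.osd_fsid": "...", ...}}]}
    """
    report = json.loads(lvm_list_json)
    result = {}
    for osd_id, lvs in report.items():
        for lv in lvs:
            # Only the "block" record carries the data device; other types
            # (e.g. "db", "wal") may appear for OSDs with separate metadata LVs.
            if lv.get("type") == "block":
                result[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv.get("devices", []),
                    "osd_fsid": lv["tags"].get("ceph.osd_fsid", ""),
                }
    return result
```

For the log above this would yield three entries ("0", "1", "2"), each backed by a single loop device, matching the earlier `--> passed data devices: 0 physical, 3 LVM` line.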
Dec  5 01:25:49 compute-0 systemd[1]: libpod-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope: Deactivated successfully.
Dec  5 01:25:49 compute-0 podman[280091]: 2025-12-05 01:25:49.924825309 +0000 UTC m=+1.144308750 container died c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9eadf0664c7ed87e017442a1caad484d8754a140ce2acf9017021b8047fa386b-merged.mount: Deactivated successfully.
Dec  5 01:25:50 compute-0 podman[280091]: 2025-12-05 01:25:50.013515554 +0000 UTC m=+1.232999005 container remove c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:25:50 compute-0 systemd[1]: libpod-conmon-c1964e64b34ef2c20a4d964d182ac9c8a61dde24f5517df72df8e5f6e4c5f25c.scope: Deactivated successfully.
Dec  5 01:25:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.094407565 +0000 UTC m=+0.055532637 container create 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:25:51 compute-0 systemd[1]: Started libpod-conmon-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope.
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.071823242 +0000 UTC m=+0.032948344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.207452172 +0000 UTC m=+0.168577274 container init 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.222488123 +0000 UTC m=+0.183613215 container start 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:25:51 compute-0 condescending_elgamal[280417]: 167 167
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.230365914 +0000 UTC m=+0.191491066 container attach 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:25:51 compute-0 systemd[1]: libpod-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope: Deactivated successfully.
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.230869548 +0000 UTC m=+0.191994690 container died 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9e6018414f42b2e1a63d430b42ff09309067ed2995006f0a75f7c56a026af3b-merged.mount: Deactivated successfully.
Dec  5 01:25:51 compute-0 podman[280398]: 2025-12-05 01:25:51.307608458 +0000 UTC m=+0.268733550 container remove 7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:25:51 compute-0 systemd[1]: libpod-conmon-7ad159a7e23fe2bdc4dd375a254d4e872ecf51db5f9873ed18901fd0952edb77.scope: Deactivated successfully.
Dec  5 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.581336386 +0000 UTC m=+0.091723250 container create dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.543242809 +0000 UTC m=+0.053629723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:25:51 compute-0 systemd[1]: Started libpod-conmon-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope.
Dec  5 01:25:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.748000335 +0000 UTC m=+0.258387209 container init dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.772563934 +0000 UTC m=+0.282950758 container start dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:25:51 compute-0 podman[280466]: 2025-12-05 01:25:51.778205742 +0000 UTC m=+0.288592616 container attach dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:25:52 compute-0 python3.9[280534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:52 compute-0 elegant_solomon[280529]: {
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_id": 0,
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "type": "bluestore"
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    },
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_id": 1,
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "type": "bluestore"
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    },
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_id": 2,
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:        "type": "bluestore"
Dec  5 01:25:52 compute-0 elegant_solomon[280529]:    }
Dec  5 01:25:52 compute-0 elegant_solomon[280529]: }
Dec  5 01:25:52 compute-0 systemd[1]: libpod-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Deactivated successfully.
Dec  5 01:25:52 compute-0 systemd[1]: libpod-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Consumed 1.209s CPU time.
Dec  5 01:25:52 compute-0 podman[280466]: 2025-12-05 01:25:52.985878135 +0000 UTC m=+1.496264989 container died dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bff3efd00a90fa8f79a69937dc1424349fcdc1647494aae06e3d8e0094cf399a-merged.mount: Deactivated successfully.
Dec  5 01:25:53 compute-0 podman[280466]: 2025-12-05 01:25:53.07743034 +0000 UTC m=+1.587817164 container remove dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:25:53 compute-0 systemd[1]: libpod-conmon-dd70189ae9fb8a7d7a8661fbcbff94b37eed357b4d979e93ffb5a22c1efa3b84.scope: Deactivated successfully.
Dec  5 01:25:53 compute-0 python3.9[280715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:25:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:25:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5a34326d-0cf4-4720-9c3a-a0cc826ac947 does not exist
Dec  5 01:25:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1765033a-fa2e-4058-8e76-30069dcbb3f3 does not exist
Dec  5 01:25:53 compute-0 python3.9[280900]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897952.4023588-138-123485413661297/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:25:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:54 compute-0 python3.9[281050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:55 compute-0 python3.9[281171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897954.227336-138-30038818134245/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:25:57 compute-0 python3.9[281321]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:58 compute-0 python3.9[281442]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897956.6940145-182-45735763062298/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:25:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:25:59 compute-0 python3.9[281592]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:25:59 compute-0 podman[158197]: time="2025-12-05T01:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6841 "" "Go-http-client/1.1"
Dec  5 01:26:00 compute-0 python3.9[281713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897958.5798128-182-187535376460068/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:01 compute-0 python3.9[281863]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: ERROR   01:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:26:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:26:02 compute-0 podman[281989]: 2025-12-05 01:26:02.016034628 +0000 UTC m=+0.109342684 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:26:02 compute-0 podman[281990]: 2025-12-05 01:26:02.028612181 +0000 UTC m=+0.120148767 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:26:02 compute-0 podman[281991]: 2025-12-05 01:26:02.072666545 +0000 UTC m=+0.154607062 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 01:26:02 compute-0 python3.9[282074]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:02 compute-0 podman[282157]: 2025-12-05 01:26:02.705029441 +0000 UTC m=+0.119358005 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:26:03 compute-0 python3.9[282255]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:03 compute-0 python3.9[282333]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:04 compute-0 python3.9[282485]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:05 compute-0 python3.9[282563]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Dec  5 01:26:06 compute-0 python3.9[282715]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:07 compute-0 podman[282839]: 2025-12-05 01:26:07.425180678 +0000 UTC m=+0.118454439 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, name=ubi9)
Dec  5 01:26:07 compute-0 python3.9[282887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:08 compute-0 python3.9[282965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec  5 01:26:09 compute-0 python3.9[283117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:09 compute-0 python3.9[283195]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:26:10 compute-0 podman[283295]: 2025-12-05 01:26:10.748652826 +0000 UTC m=+0.153878222 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, distribution-scope=public)
Dec  5 01:26:11 compute-0 python3.9[283366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:26:11 compute-0 systemd[1]: Reloading.
Dec  5 01:26:11 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:26:11 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:26:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:26:12 compute-0 python3.9[283555]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:13 compute-0 python3.9[283633]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:26:15 compute-0 python3.9[283785]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:15 compute-0 podman[283835]: 2025-12-05 01:26:15.715111604 +0000 UTC m=+0.120501507 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:26:15 compute-0 python3.9[283885]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:26:16
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:26:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:26:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:18 compute-0 python3.9[284037]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:26:18 compute-0 systemd[1]: Reloading.
Dec  5 01:26:18 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:26:18 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:26:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 42 op/s
Dec  5 01:26:18 compute-0 systemd[1]: Starting Create netns directory...
Dec  5 01:26:18 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  5 01:26:18 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  5 01:26:18 compute-0 systemd[1]: Finished Create netns directory.
Dec  5 01:26:20 compute-0 python3.9[284232]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Dec  5 01:26:21 compute-0 python3.9[284384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:22 compute-0 python3.9[284507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764897980.58235-333-96165476405112/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:23 compute-0 python3.9[284659]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:26:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:24 compute-0 python3.9[284811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:26:25 compute-0 python3.9[284934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764897983.782176-358-48370933557769/.source.json _original_basename=.d67sywwq follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:26:26 compute-0 python3.9[285086]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:29 compute-0 podman[158197]: time="2025-12-05T01:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32821 "" "Go-http-client/1.1"
Dec  5 01:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 6837 "" "Go-http-client/1.1"
Dec  5 01:26:30 compute-0 python3.9[285513]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  5 01:26:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: ERROR   01:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:26:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:26:31 compute-0 python3.9[285665]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:26:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:32 compute-0 podman[285765]: 2025-12-05 01:26:32.728034439 +0000 UTC m=+0.125463336 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 01:26:32 compute-0 podman[285767]: 2025-12-05 01:26:32.757127774 +0000 UTC m=+0.149619053 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:26:32 compute-0 podman[285770]: 2025-12-05 01:26:32.787055362 +0000 UTC m=+0.169961632 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:26:32 compute-0 podman[285863]: 2025-12-05 01:26:32.849671207 +0000 UTC m=+0.095039174 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  5 01:26:32 compute-0 python3.9[285895]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  5 01:26:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:35 compute-0 python3[286082]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:26:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:37 compute-0 podman[286122]: 2025-12-05 01:26:37.647366435 +0000 UTC m=+0.066358600 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., container_name=kepler)
Dec  5 01:26:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.545 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.555 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.556 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:26:42.557 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:26:44 compute-0 podman[286157]: 2025-12-05 01:26:44.056977183 +0000 UTC m=+2.470574575 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Dec  5 01:26:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:26:46 compute-0 podman[286207]: 2025-12-05 01:26:46.244341973 +0000 UTC m=+0.362261880 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:26:46 compute-0 podman[286093]: 2025-12-05 01:26:46.24813257 +0000 UTC m=+10.735390328 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 01:26:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:46 compute-0 podman[286251]: 2025-12-05 01:26:46.51298363 +0000 UTC m=+0.103405188 container create 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  5 01:26:46 compute-0 podman[286251]: 2025-12-05 01:26:46.457153426 +0000 UTC m=+0.047575064 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 01:26:46 compute-0 python3[286082]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 01:26:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:47 compute-0 python3.9[286437]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:26:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:49 compute-0 python3.9[286591]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:49 compute-0 python3.9[286668]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:26:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:50 compute-0 python3.9[286819]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898009.9640265-446-10032884565177/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:26:51 compute-0 python3.9[286895]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:26:51 compute-0 systemd[1]: Reloading.
Dec  5 01:26:51 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:26:51 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:26:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:53 compute-0 python3.9[287005]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:26:53 compute-0 systemd[1]: Reloading.
Dec  5 01:26:53 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:26:53 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:26:53 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec  5 01:26:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393669ba6562fd4aacce8ce9edf46b11d718390a30075dad6e312fb8e357d173/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/393669ba6562fd4aacce8ce9edf46b11d718390a30075dad6e312fb8e357d173/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:53 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.
Dec  5 01:26:53 compute-0 podman[287073]: 2025-12-05 01:26:53.896271103 +0000 UTC m=+0.192289910 container init 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:26:53 compute-0 ovn_metadata_agent[287107]: + sudo -E kolla_set_configs
Dec  5 01:26:53 compute-0 podman[287073]: 2025-12-05 01:26:53.932238339 +0000 UTC m=+0.228257166 container start 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  5 01:26:53 compute-0 edpm-start-podman-container[287073]: ovn_metadata_agent
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Validating config file
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Copying service configuration files
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Writing out command to execute
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: ++ cat /run_command
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + CMD=neutron-ovn-metadata-agent
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + ARGS=
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + sudo kolla_copy_cacerts
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + [[ ! -n '' ]]
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + . kolla_extend_start
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: Running command: 'neutron-ovn-metadata-agent'
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + umask 0022
Dec  5 01:26:54 compute-0 ovn_metadata_agent[287107]: + exec neutron-ovn-metadata-agent
Dec  5 01:26:54 compute-0 podman[287126]: 2025-12-05 01:26:54.065285061 +0000 UTC m=+0.116675615 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:26:54 compute-0 edpm-start-podman-container[287071]: Creating additional drop-in dependency for "ovn_metadata_agent" (33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638)
Dec  5 01:26:54 compute-0 systemd[1]: Reloading.
Dec  5 01:26:54 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:26:54 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:26:54 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec  5 01:26:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fbe546c2-0d9a-4e20-95c8-cfc58367be8f does not exist
Dec  5 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a9a86025-4371-409f-97b3-ff0e6e9b8ea7 does not exist
Dec  5 01:26:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c2fa5975-d35d-42d6-966b-2bd57fce7343 does not exist
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:26:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:26:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:26:55 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Dec  5 01:26:55 compute-0 systemd-logind[792]: Session 54 logged out. Waiting for processes to exit.
Dec  5 01:26:55 compute-0 systemd[1]: session-54.scope: Consumed 1min 30.337s CPU time.
Dec  5 01:26:55 compute-0 systemd-logind[792]: Removed session 54.
Dec  5 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:26:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.718457863 +0000 UTC m=+0.075025440 container create 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:26:55 compute-0 systemd[1]: Started libpod-conmon-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope.
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.689585375 +0000 UTC m=+0.046152982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:26:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.854250071 +0000 UTC m=+0.210817738 container init 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.870944068 +0000 UTC m=+0.227511685 container start 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.876988347 +0000 UTC m=+0.233555964 container attach 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:26:55 compute-0 angry_almeida[287456]: 167 167
Dec  5 01:26:55 compute-0 systemd[1]: libpod-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope: Deactivated successfully.
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.885587297 +0000 UTC m=+0.242154904 container died 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:26:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e283dbcdfc773cb6d7960df6e3cc33e11d6130c2bf99020ea5167738e7968a97-merged.mount: Deactivated successfully.
Dec  5 01:26:55 compute-0 podman[287440]: 2025-12-05 01:26:55.952864969 +0000 UTC m=+0.309432556 container remove 2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_almeida, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  5 01:26:55 compute-0 systemd[1]: libpod-conmon-2edd814af2bc09cd73db3844cc8dfe45de554d2493cda41283af33bae424e5cd.scope: Deactivated successfully.
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.096 287122 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.097 287122 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.098 287122 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.099 287122 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.100 287122 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.101 287122 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.102 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.103 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.104 287122 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.105 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.106 287122 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.107 287122 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.108 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.109 287122 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.110 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.111 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.112 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.113 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.114 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.115 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.116 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.117 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.118 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.119 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.120 287122 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.121 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.122 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.123 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.124 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.125 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.126 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.127 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.128 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.129 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.130 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.131 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.132 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.133 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.134 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.135 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.136 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.137 287122 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.138 287122 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.150 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.151 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.151 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.165 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 8dd76c1c-ab01-42af-b35e-2e870841b6ad (UUID: 8dd76c1c-ab01-42af-b35e-2e870841b6ad) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.187 287122 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.191 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.197 287122 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.202 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '8dd76c1c-ab01-42af-b35e-2e870841b6ad'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], external_ids={}, name=8dd76c1c-ab01-42af-b35e-2e870841b6ad, nb_cfg_timestamp=1764896443569, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.203 287122 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f64f0638e20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.204 287122 INFO oslo_service.service [-] Starting 1 workers
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.210 287122 DEBUG oslo_service.service [-] Started child 287490 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.214 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpos3ihnt6/privsep.sock']
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.217 287490 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-956106'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec  5 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.249720973 +0000 UTC m=+0.106154310 container create 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.254 287490 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.255 287490 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.255 287490 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.261 287490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.273 287490 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  5 01:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.279 287490 INFO eventlet.wsgi.server [-] (287490) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec  5 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.215848486 +0000 UTC m=+0.072281883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:26:56 compute-0 systemd[1]: Started libpod-conmon-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope.
Dec  5 01:26:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.45980123 +0000 UTC m=+0.316234567 container init 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.4827029 +0000 UTC m=+0.339136237 container start 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:26:56 compute-0 podman[287479]: 2025-12-05 01:26:56.487041051 +0000 UTC m=+0.343474388 container attach 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:26:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.010 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.011 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpos3ihnt6/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.869 287504 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.874 287504 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.876 287504 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:56.877 287504 INFO oslo.privsep.daemon [-] privsep daemon running as pid 287504
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.016 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[0a119daf-b097-4494-8298-5b906b94100a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 01:26:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 01:26:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:57.525 287504 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 01:26:57 compute-0 vibrant_goldberg[287498]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:26:57 compute-0 vibrant_goldberg[287498]: --> relative data size: 1.0
Dec  5 01:26:57 compute-0 vibrant_goldberg[287498]: --> All data devices are unavailable
Dec  5 01:26:57 compute-0 systemd[1]: libpod-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Deactivated successfully.
Dec  5 01:26:57 compute-0 systemd[1]: libpod-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Consumed 1.168s CPU time.
Dec  5 01:26:57 compute-0 podman[287479]: 2025-12-05 01:26:57.713856978 +0000 UTC m=+1.570290325 container died 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e5eba10045fe0a0937e6791e6a79cac42d3bdac53682231b8545677ef33cf7-merged.mount: Deactivated successfully.
Dec  5 01:26:57 compute-0 podman[287479]: 2025-12-05 01:26:57.798934158 +0000 UTC m=+1.655367505 container remove 56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_goldberg, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:26:57 compute-0 systemd[1]: libpod-conmon-56e3ab83e37878a9c4316f039e10fc1e95a149b0b78e557895e67916337440bd.scope: Deactivated successfully.
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.160 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[15914d36-c181-4361-8d8c-752121762e9b]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.163 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, column=external_ids, values=({'neutron:ovn-metadata-id': '87fd3287-3707-559d-869a-060a9ee7b0a4'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.181 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.189 287122 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.189 287122 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.190 287122 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.191 287122 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.192 287122 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.193 287122 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.194 287122 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.195 287122 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.196 287122 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.197 287122 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.198 287122 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.199 287122 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.200 287122 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.200 287122 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.201 287122 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.202 287122 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.203 287122 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.204 287122 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.205 287122 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.206 287122 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.207 287122 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.208 287122 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.209 287122 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.210 287122 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.211 287122 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.212 287122 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.213 287122 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.214 287122 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.215 287122 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.216 287122 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.217 287122 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.218 287122 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.219 287122 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.220 287122 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.221 287122 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.222 287122 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.223 287122 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.224 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.225 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.226 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.227 287122 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.228 287122 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.229 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.230 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.231 287122 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.232 287122 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.233 287122 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.234 287122 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.235 287122 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.236 287122 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.237 287122 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.238 287122 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.239 287122 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.240 287122 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.241 287122 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.242 287122 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.243 287122 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.244 287122 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.245 287122 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.246 287122 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.247 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.248 287122 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.249 287122 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.250 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.251 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.252 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.253 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.254 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.255 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:26:58 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:26:58.256 287122 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  5 01:26:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.831243093 +0000 UTC m=+0.075289787 container create db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.796430969 +0000 UTC m=+0.040477713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:26:58 compute-0 systemd[1]: Started libpod-conmon-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope.
Dec  5 01:26:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.975809467 +0000 UTC m=+0.219856221 container init db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:26:58 compute-0 podman[287680]: 2025-12-05 01:26:58.993983435 +0000 UTC m=+0.238030129 container start db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:58.999824619 +0000 UTC m=+0.243871393 container attach db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:26:59 compute-0 naughty_proskuriakova[287696]: 167 167
Dec  5 01:26:59 compute-0 systemd[1]: libpod-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope: Deactivated successfully.
Dec  5 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:59.004606712 +0000 UTC m=+0.248653376 container died db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5e3175df4e3a9ed826d2cfda6756d237dd93eb7cb7cd62187df3b2348a5c4ff-merged.mount: Deactivated successfully.
Dec  5 01:26:59 compute-0 podman[287680]: 2025-12-05 01:26:59.074724134 +0000 UTC m=+0.318770798 container remove db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_proskuriakova, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  5 01:26:59 compute-0 systemd[1]: libpod-conmon-db0318d7a79277372ef91a8b995252e3df6ab7721d0ddb49e56db814686be292.scope: Deactivated successfully.
Dec  5 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.321383793 +0000 UTC m=+0.077013175 container create 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.285657404 +0000 UTC m=+0.041286826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:26:59 compute-0 systemd[1]: Started libpod-conmon-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope.
Dec  5 01:26:59 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.478666943 +0000 UTC m=+0.234296315 container init 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.497068378 +0000 UTC m=+0.252697760 container start 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:26:59 compute-0 podman[287718]: 2025-12-05 01:26:59.503558529 +0000 UTC m=+0.259187961 container attach 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:26:59 compute-0 podman[158197]: time="2025-12-05T01:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 37300 "" "Go-http-client/1.1"
Dec  5 01:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7680 "" "Go-http-client/1.1"
Dec  5 01:27:00 compute-0 kind_swirles[287733]: {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    "0": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "devices": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "/dev/loop3"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            ],
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_name": "ceph_lv0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_size": "21470642176",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "name": "ceph_lv0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "tags": {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_name": "ceph",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.crush_device_class": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.encrypted": "0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_id": "0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.vdo": "0"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            },
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "vg_name": "ceph_vg0"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        }
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    ],
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    "1": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "devices": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "/dev/loop4"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            ],
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_name": "ceph_lv1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_size": "21470642176",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "name": "ceph_lv1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "tags": {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_name": "ceph",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.crush_device_class": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.encrypted": "0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_id": "1",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.vdo": "0"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            },
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "vg_name": "ceph_vg1"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        }
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    ],
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    "2": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "devices": [
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "/dev/loop5"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            ],
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_name": "ceph_lv2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_size": "21470642176",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "name": "ceph_lv2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "tags": {
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.cluster_name": "ceph",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.crush_device_class": "",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.encrypted": "0",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osd_id": "2",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:                "ceph.vdo": "0"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            },
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "type": "block",
Dec  5 01:27:00 compute-0 kind_swirles[287733]:            "vg_name": "ceph_vg2"
Dec  5 01:27:00 compute-0 kind_swirles[287733]:        }
Dec  5 01:27:00 compute-0 kind_swirles[287733]:    ]
Dec  5 01:27:00 compute-0 kind_swirles[287733]: }
Dec  5 01:27:00 compute-0 systemd[1]: libpod-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope: Deactivated successfully.
Dec  5 01:27:00 compute-0 podman[287718]: 2025-12-05 01:27:00.302377434 +0000 UTC m=+1.058006766 container died 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:27:00 compute-0 systemd-logind[792]: New session 55 of user zuul.
Dec  5 01:27:00 compute-0 systemd[1]: Started Session 55 of User zuul.
Dec  5 01:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-98913c89380ee6b6dd8168527dd7c8af5d07a8fc7d502b5095a72c9e770741e0-merged.mount: Deactivated successfully.
Dec  5 01:27:00 compute-0 podman[287718]: 2025-12-05 01:27:00.389576823 +0000 UTC m=+1.145206165 container remove 6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:27:00 compute-0 systemd[1]: libpod-conmon-6f2650eb33b3e266c2fbda9cc30a74f49e9a42b84bbc4fd00b0e4debe644e06d.scope: Deactivated successfully.
Dec  5 01:27:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: ERROR   01:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:27:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.496849336 +0000 UTC m=+0.064062593 container create d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:27:01 compute-0 systemd[1]: Started libpod-conmon-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope.
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.469730447 +0000 UTC m=+0.036943744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:27:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:27:01 compute-0 python3.9[288040]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.707817057 +0000 UTC m=+0.275030344 container init d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.729087932 +0000 UTC m=+0.296301179 container start d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.734034361 +0000 UTC m=+0.301247658 container attach d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:27:01 compute-0 sharp_merkle[288063]: 167 167
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.741274313 +0000 UTC m=+0.308487560 container died d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:27:01 compute-0 systemd[1]: libpod-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope: Deactivated successfully.
Dec  5 01:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-6780c3b8233f8f31b301a0cfe43f906960ea4eeee26304bc4898d8b1f3f1e9a2-merged.mount: Deactivated successfully.
Dec  5 01:27:01 compute-0 podman[288048]: 2025-12-05 01:27:01.784161113 +0000 UTC m=+0.351374380 container remove d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_merkle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:27:01 compute-0 systemd[1]: libpod-conmon-d13ca90e411a7cac455d4e88c5f3b53210e15387898f037a54ad910a75a0c22a.scope: Deactivated successfully.
Dec  5 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.005138753 +0000 UTC m=+0.061958234 container create 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:27:02 compute-0 systemd[1]: Started libpod-conmon-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope.
Dec  5 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:01.983188229 +0000 UTC m=+0.040007790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:27:02 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.123801152 +0000 UTC m=+0.180620723 container init 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.148971706 +0000 UTC m=+0.205791227 container start 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:27:02 compute-0 podman[288091]: 2025-12-05 01:27:02.154507041 +0000 UTC m=+0.211326602 container attach 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:27:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.732106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022732157, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3473827, "memory_usage": 3522088, "flush_reason": "Manual Compaction"}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022749879, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3408983, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9687, "largest_seqno": 11728, "table_properties": {"data_size": 3399730, "index_size": 5875, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17874, "raw_average_key_size": 19, "raw_value_size": 3381359, "raw_average_value_size": 3683, "num_data_blocks": 266, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897789, "oldest_key_time": 1764897789, "file_creation_time": 1764898022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17848 microseconds, and 7349 cpu microseconds.
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.749961) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3408983 bytes OK
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.749977) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751808) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751821) EVENT_LOG_v1 {"time_micros": 1764898022751816, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.751838) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3465306, prev total WAL file size 3465306, number of live WAL files 2.
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.753181) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3329KB)], [26(5939KB)]
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022753221, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9491325, "oldest_snapshot_seqno": -1}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3686 keys, 7824079 bytes, temperature: kUnknown
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022791824, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7824079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7795882, "index_size": 17911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9221, "raw_key_size": 88514, "raw_average_key_size": 24, "raw_value_size": 7725743, "raw_average_value_size": 2095, "num_data_blocks": 775, "num_entries": 3686, "num_filter_entries": 3686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898022, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.792048) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7824079 bytes
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.799875) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 245.3 rd, 202.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4200, records dropped: 514 output_compression: NoCompression
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.800716) EVENT_LOG_v1 {"time_micros": 1764898022800588, "job": 10, "event": "compaction_finished", "compaction_time_micros": 38694, "compaction_time_cpu_micros": 16949, "output_level": 6, "num_output_files": 1, "total_output_size": 7824079, "num_input_records": 4200, "num_output_records": 3686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022801770, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898022802778, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.752999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803137) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:02 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:27:02.803155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:27:03 compute-0 podman[288236]: 2025-12-05 01:27:03.019078215 +0000 UTC m=+0.112585740 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:27:03 compute-0 podman[288237]: 2025-12-05 01:27:03.046288456 +0000 UTC m=+0.122513728 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:27:03 compute-0 podman[288238]: 2025-12-05 01:27:03.060949006 +0000 UTC m=+0.131124009 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 01:27:03 compute-0 podman[288246]: 2025-12-05 01:27:03.069397073 +0000 UTC m=+0.133702491 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:27:03 compute-0 python3.9[288346]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]: {
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_id": 0,
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "type": "bluestore"
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    },
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_id": 1,
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "type": "bluestore"
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    },
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_id": 2,
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:        "type": "bluestore"
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]:    }
Dec  5 01:27:03 compute-0 priceless_zhukovsky[288130]: }
Dec  5 01:27:03 compute-0 podman[288091]: 2025-12-05 01:27:03.332974416 +0000 UTC m=+1.389793937 container died 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:27:03 compute-0 systemd[1]: libpod-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Deactivated successfully.
Dec  5 01:27:03 compute-0 systemd[1]: libpod-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Consumed 1.169s CPU time.
Dec  5 01:27:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9f75ecb6357c5a13328c21a8306bfd743b1d1612d76a578fcd79cfa91b30e8-merged.mount: Deactivated successfully.
Dec  5 01:27:03 compute-0 podman[288091]: 2025-12-05 01:27:03.427214812 +0000 UTC m=+1.484034303 container remove 96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:27:03 compute-0 systemd[1]: libpod-conmon-96b8c3b6fa014aaafcd48c53118a5cf5f18ca1bd9efc5b00090cae2d48810112.scope: Deactivated successfully.
Dec  5 01:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:27:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:27:03 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:27:03 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d6398f30-efed-499b-ab20-be364d3ef25f does not exist
Dec  5 01:27:03 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d7db7296-23f6-483b-b846-bd305c9e7e53 does not exist
Dec  5 01:27:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:27:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:27:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:05 compute-0 python3.9[288597]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:27:05 compute-0 systemd[1]: Reloading.
Dec  5 01:27:05 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:27:05 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:27:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:08 compute-0 python3.9[288785]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:27:08 compute-0 podman[288786]: 2025-12-05 01:27:08.752535532 +0000 UTC m=+0.151436377 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, release-0.7.12=, io.buildah.version=1.29.0, config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4)
Dec  5 01:27:08 compute-0 network[288822]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:27:08 compute-0 network[288823]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:27:08 compute-0 network[288824]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:27:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:15 compute-0 python3.9[289095]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:15 compute-0 podman[289220]: 2025-12-05 01:27:15.948168448 +0000 UTC m=+0.134502934 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, container_name=openstack_network_exporter)
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:27:16
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'images', 'vms', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:16 compute-0 python3.9[289268]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:16 compute-0 podman[289272]: 2025-12-05 01:27:16.384724408 +0000 UTC m=+0.085366329 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:27:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:17 compute-0 python3.9[289446]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:18 compute-0 python3.9[289599]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:21 compute-0 python3.9[289753]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:22 compute-0 python3.9[289906]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:23 compute-0 python3.9[290059]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:27:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:24 compute-0 podman[290184]: 2025-12-05 01:27:24.688097531 +0000 UTC m=+0.105554963 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 01:27:24 compute-0 python3.9[290229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:25 compute-0 python3.9[290381]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:27:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:26 compute-0 python3.9[290533]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:27 compute-0 python3.9[290685]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:28 compute-0 python3.9[290837]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:29 compute-0 podman[158197]: time="2025-12-05T01:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7276 "" "Go-http-client/1.1"
Dec  5 01:27:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:30 compute-0 python3.9[290989]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: ERROR   01:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:27:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:27:31 compute-0 python3.9[291141]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:33 compute-0 podman[291266]: 2025-12-05 01:27:33.260616318 +0000 UTC m=+0.102437606 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:27:33 compute-0 podman[291267]: 2025-12-05 01:27:33.286451161 +0000 UTC m=+0.120460551 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:27:33 compute-0 podman[291265]: 2025-12-05 01:27:33.28714978 +0000 UTC m=+0.134842603 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125)
Dec  5 01:27:33 compute-0 podman[291268]: 2025-12-05 01:27:33.323346153 +0000 UTC m=+0.147332192 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec  5 01:27:33 compute-0 python3.9[291370]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:34 compute-0 python3.9[291528]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:35 compute-0 python3.9[291680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:36 compute-0 python3.9[291832]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:37 compute-0 python3.9[291984]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:38 compute-0 python3.9[292136]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:39 compute-0 podman[292260]: 2025-12-05 01:27:39.439591945 +0000 UTC m=+0.110381999 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, architecture=x86_64, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:27:39 compute-0 python3.9[292304]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:27:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:41 compute-0 python3.9[292456]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:43 compute-0 python3.9[292608]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:27:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:44 compute-0 python3.9[292760]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:27:44 compute-0 systemd[1]: Reloading.
Dec  5 01:27:44 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:27:44 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:27:46 compute-0 podman[292947]: 2025-12-05 01:27:46.212534006 +0000 UTC m=+0.124321208 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:27:46 compute-0 python3.9[292948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:46 compute-0 podman[293016]: 2025-12-05 01:27:46.673630494 +0000 UTC m=+0.083228199 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:27:47 compute-0 python3.9[293143]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:48 compute-0 python3.9[293296]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:49 compute-0 python3.9[293449]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:50 compute-0 python3.9[293603]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:51 compute-0 python3.9[293756]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:52 compute-0 python3.9[293909]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:27:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:54 compute-0 podman[294034]: 2025-12-05 01:27:54.97962212 +0000 UTC m=+0.113127455 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  5 01:27:55 compute-0 python3.9[294081]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  5 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.153 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.154 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:27:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:27:56.154 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:27:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:56 compute-0 python3.9[294234]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:27:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:27:57 compute-0 python3.9[294318]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:27:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:27:59 compute-0 podman[158197]: time="2025-12-05T01:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:27:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7283 "" "Go-http-client/1.1"
Dec  5 01:28:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:01 compute-0 python3.9[294471]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: ERROR   01:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:28:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:28:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:02 compute-0 python3.9[294626]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:28:03 compute-0 podman[294678]: 2025-12-05 01:28:03.69841187 +0000 UTC m=+0.103054133 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:28:03 compute-0 podman[294677]: 2025-12-05 01:28:03.726522497 +0000 UTC m=+0.127831847 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 01:28:03 compute-0 podman[294682]: 2025-12-05 01:28:03.745360964 +0000 UTC m=+0.126785188 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:28:03 compute-0 podman[294680]: 2025-12-05 01:28:03.746736722 +0000 UTC m=+0.133927077 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:28:04 compute-0 python3.9[294918]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:28:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:04 compute-0 podman[295107]: 2025-12-05 01:28:04.938081156 +0000 UTC m=+0.116132139 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:28:05 compute-0 podman[295107]: 2025-12-05 01:28:05.087029123 +0000 UTC m=+0.265080116 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:28:06 compute-0 python3.9[295299]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:28:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:28:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:28:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:07 compute-0 python3.9[295598]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e0da4494-2fac-4667-a183-fac897211f20 does not exist
Dec  5 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 62e9c5a8-0114-4358-85b9-01044d7c5811 does not exist
Dec  5 01:28:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c400a858-933a-4f79-93a1-47c47866171b does not exist
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:28:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:28:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.533212999 +0000 UTC m=+0.081067719 container create c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:28:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.501268725 +0000 UTC m=+0.049123465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:08 compute-0 systemd[1]: Started libpod-conmon-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope.
Dec  5 01:28:08 compute-0 python3.9[295892]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:08 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.714741796 +0000 UTC m=+0.262596536 container init c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.734791027 +0000 UTC m=+0.282645747 container start c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.741810173 +0000 UTC m=+0.289664913 container attach c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:28:08 compute-0 focused_newton[295923]: 167 167
Dec  5 01:28:08 compute-0 systemd[1]: libpod-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope: Deactivated successfully.
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.747877583 +0000 UTC m=+0.295732343 container died c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:28:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f74d99a2e5f9d6bb2b0b10635d17f3f6455329f5c8e99cf5bcf5a066ffb2823e-merged.mount: Deactivated successfully.
Dec  5 01:28:08 compute-0 podman[295906]: 2025-12-05 01:28:08.840774562 +0000 UTC m=+0.388629292 container remove c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_newton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:28:08 compute-0 systemd[1]: libpod-conmon-c242900af9670b0882c5adbd5ef94280159028f1fa23c428b4f42eb21ac50524.scope: Deactivated successfully.
Dec  5 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.05056039 +0000 UTC m=+0.083485046 container create 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.014426779 +0000 UTC m=+0.047351505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:09 compute-0 systemd[1]: Started libpod-conmon-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope.
Dec  5 01:28:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.181847362 +0000 UTC m=+0.214772048 container init 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.201716118 +0000 UTC m=+0.234640774 container start 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:28:09 compute-0 podman[295973]: 2025-12-05 01:28:09.208322813 +0000 UTC m=+0.241247499 container attach 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:28:09 compute-0 podman[296076]: 2025-12-05 01:28:09.739069179 +0000 UTC m=+0.139530084 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, 
release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:28:10 compute-0 python3.9[296139]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:10 compute-0 nostalgic_ride[296024]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:28:10 compute-0 nostalgic_ride[296024]: --> relative data size: 1.0
Dec  5 01:28:10 compute-0 nostalgic_ride[296024]: --> All data devices are unavailable
Dec  5 01:28:10 compute-0 systemd[1]: libpod-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Deactivated successfully.
Dec  5 01:28:10 compute-0 podman[295973]: 2025-12-05 01:28:10.44128859 +0000 UTC m=+1.474213246 container died 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:28:10 compute-0 systemd[1]: libpod-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Consumed 1.173s CPU time.
Dec  5 01:28:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebeadd46d0c86c5625aea68681bcda392c224e9a22cb503f9c8d85a5b9de8b5f-merged.mount: Deactivated successfully.
Dec  5 01:28:10 compute-0 podman[295973]: 2025-12-05 01:28:10.53386013 +0000 UTC m=+1.566784786 container remove 01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ride, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:28:10 compute-0 systemd[1]: libpod-conmon-01afabe20f064870808f9cfed8e313fea44c6d884faa1d2e6f47196c71ed0841.scope: Deactivated successfully.
Dec  5 01:28:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:11 compute-0 python3.9[296409]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.496379244 +0000 UTC m=+0.064980859 container create daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:28:11 compute-0 systemd[1]: Started libpod-conmon-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope.
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.470046077 +0000 UTC m=+0.038647682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.636404621 +0000 UTC m=+0.205006246 container init daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.656074111 +0000 UTC m=+0.224675696 container start daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.661096031 +0000 UTC m=+0.229697666 container attach daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:28:11 compute-0 unruffled_gauss[296536]: 167 167
Dec  5 01:28:11 compute-0 systemd[1]: libpod-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope: Deactivated successfully.
Dec  5 01:28:11 compute-0 conmon[296536]: conmon daf53dd4cc0081ea6436 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope/container/memory.events
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.668389185 +0000 UTC m=+0.236990790 container died daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5737be733f4411a0415f8cac954e5fdad07a1fb9042ab8262b13b6871f4f5cfa-merged.mount: Deactivated successfully.
Dec  5 01:28:11 compute-0 podman[296495]: 2025-12-05 01:28:11.729553566 +0000 UTC m=+0.298155161 container remove daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:28:11 compute-0 systemd[1]: libpod-conmon-daf53dd4cc0081ea643617e63a6f1735d3ba3650cf86a4d5a4217a9f6d1c1e63.scope: Deactivated successfully.
Dec  5 01:28:11 compute-0 podman[296617]: 2025-12-05 01:28:11.961042542 +0000 UTC m=+0.066460401 container create 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:11.933439399 +0000 UTC m=+0.038857238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:12 compute-0 systemd[1]: Started libpod-conmon-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope.
Dec  5 01:28:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.151302043 +0000 UTC m=+0.256719972 container init 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.187451555 +0000 UTC m=+0.292869404 container start 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:28:12 compute-0 podman[296617]: 2025-12-05 01:28:12.194109081 +0000 UTC m=+0.299526930 container attach 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:28:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:12 compute-0 python3.9[296679]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]: {
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:    "0": [
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:        {
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "devices": [
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "/dev/loop3"
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            ],
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "lv_name": "ceph_lv0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "lv_size": "21470642176",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "name": "ceph_lv0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:            "tags": {
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:12 compute-0 romantic_lichterman[296677]:                "ceph.cluster_name": "ceph",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.crush_device_class": "",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.encrypted": "0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_id": "0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.vdo": "0"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            },
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "vg_name": "ceph_vg0"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:        }
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:    ],
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:    "1": [
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:        {
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "devices": [
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "/dev/loop4"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            ],
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_name": "ceph_lv1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_size": "21470642176",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "name": "ceph_lv1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "tags": {
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cluster_name": "ceph",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.crush_device_class": "",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.encrypted": "0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_id": "1",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.vdo": "0"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            },
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "vg_name": "ceph_vg1"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:        }
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:    ],
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:    "2": [
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:        {
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "devices": [
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "/dev/loop5"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            ],
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_name": "ceph_lv2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_size": "21470642176",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "name": "ceph_lv2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "tags": {
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.cluster_name": "ceph",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.crush_device_class": "",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.encrypted": "0",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osd_id": "2",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:                "ceph.vdo": "0"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            },
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "type": "block",
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:            "vg_name": "ceph_vg2"
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:        }
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]:    ]
Dec  5 01:28:13 compute-0 romantic_lichterman[296677]: }
Dec  5 01:28:13 compute-0 systemd[1]: libpod-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope: Deactivated successfully.
Dec  5 01:28:13 compute-0 podman[296617]: 2025-12-05 01:28:13.039467187 +0000 UTC m=+1.144885046 container died 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:28:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6591cbfee6e1bd85ef1a73198f3423a2f8085e975c1cf91abb2d09c1aa12f53a-merged.mount: Deactivated successfully.
Dec  5 01:28:13 compute-0 podman[296617]: 2025-12-05 01:28:13.15825968 +0000 UTC m=+1.263677509 container remove 8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:28:13 compute-0 systemd[1]: libpod-conmon-8d5d2c99018b381ffade40104ddd7912fdbbc1e5b854381f470779e038d76cf0.scope: Deactivated successfully.
Dec  5 01:28:13 compute-0 python3.9[296894]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.34053304 +0000 UTC m=+0.086511711 container create 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.308770442 +0000 UTC m=+0.054749143 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:14 compute-0 systemd[1]: Started libpod-conmon-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope.
Dec  5 01:28:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.484671372 +0000 UTC m=+0.230650063 container init 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.503712055 +0000 UTC m=+0.249690756 container start 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:28:14 compute-0 vigorous_cartwright[297096]: 167 167
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.511263606 +0000 UTC m=+0.257242287 container attach 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:28:14 compute-0 systemd[1]: libpod-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope: Deactivated successfully.
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.513697634 +0000 UTC m=+0.259676335 container died 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:28:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e906e0e3c43dee6130d8afbf3bc2f23fd5c2d0a48dadb23c7c267d04895ed5b-merged.mount: Deactivated successfully.
Dec  5 01:28:14 compute-0 podman[297048]: 2025-12-05 01:28:14.591775048 +0000 UTC m=+0.337753719 container remove 8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:28:14 compute-0 systemd[1]: libpod-conmon-8a019cb4190653f5eb27e389fd473e1827af376673c6bf3439660b48bba108e7.scope: Deactivated successfully.
Dec  5 01:28:14 compute-0 podman[297183]: 2025-12-05 01:28:14.843547861 +0000 UTC m=+0.078282061 container create 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:28:14 compute-0 podman[297183]: 2025-12-05 01:28:14.806456363 +0000 UTC m=+0.041190613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:28:14 compute-0 systemd[1]: Started libpod-conmon-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope.
Dec  5 01:28:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.001872209 +0000 UTC m=+0.236606459 container init 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.018393611 +0000 UTC m=+0.253127791 container start 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:28:15 compute-0 podman[297183]: 2025-12-05 01:28:15.023572086 +0000 UTC m=+0.258306346 container attach 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:28:15 compute-0 python3.9[297192]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]: {
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_id": 0,
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "type": "bluestore"
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    },
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_id": 1,
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "type": "bluestore"
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    },
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_id": 2,
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:        "type": "bluestore"
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]:    }
Dec  5 01:28:16 compute-0 vigilant_shtern[297203]: }
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:28:16
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta']
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:28:16 compute-0 systemd[1]: libpod-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Deactivated successfully.
Dec  5 01:28:16 compute-0 systemd[1]: libpod-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Consumed 1.170s CPU time.
Dec  5 01:28:16 compute-0 podman[297183]: 2025-12-05 01:28:16.185032525 +0000 UTC m=+1.419766735 container died 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-89caa85fac1461e37885d1eeca9e87fa41eb94322c5875d165248217b2750913-merged.mount: Deactivated successfully.
Dec  5 01:28:16 compute-0 python3.9[297382]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:16 compute-0 podman[297183]: 2025-12-05 01:28:16.324418084 +0000 UTC m=+1.559152254 container remove 3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:28:16 compute-0 systemd[1]: libpod-conmon-3ae07302d23e504b5df5d60cc775df34b5cf9a344dbf140a2ad148bf6bde2597.scope: Deactivated successfully.
Dec  5 01:28:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:28:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:28:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c08dfc6d-194e-434a-ad1d-9a23eb9411cd does not exist
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c74ddc19-eb6f-46a4-a2ab-4b510b01a936 does not exist
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:28:16 compute-0 podman[297402]: 2025-12-05 01:28:16.419633347 +0000 UTC m=+0.130888432 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:28:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:28:17 compute-0 podman[297557]: 2025-12-05 01:28:17.70747802 +0000 UTC m=+0.113683001 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:28:18 compute-0 python3.9[297649]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:19 compute-0 python3.9[297805]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:21 compute-0 python3.9[297960]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:22 compute-0 python3.9[298115]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:23 compute-0 python3.9[298270]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:25 compute-0 python3.9[298425]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:25 compute-0 podman[298427]: 2025-12-05 01:28:25.326646103 +0000 UTC m=+0.134280147 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:28:26 compute-0 python3.9[298598]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:27 compute-0 python3.9[298753]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:29 compute-0 python3.9[298908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:29 compute-0 podman[158197]: time="2025-12-05T01:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:28:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec  5 01:28:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:30 compute-0 python3.9[299063]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: ERROR   01:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:28:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:28:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:32 compute-0 python3.9[299218]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:33 compute-0 python3.9[299373]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  5 01:28:34 compute-0 podman[299376]: 2025-12-05 01:28:34.026149176 +0000 UTC m=+0.082593002 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:28:34 compute-0 podman[299375]: 2025-12-05 01:28:34.032054001 +0000 UTC m=+0.088329852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:28:34 compute-0 podman[299377]: 2025-12-05 01:28:34.070858906 +0000 UTC m=+0.121301354 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 01:28:34 compute-0 podman[299378]: 2025-12-05 01:28:34.07741557 +0000 UTC m=+0.126218362 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:28:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:35 compute-0 python3.9[299613]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:36 compute-0 python3.9[299765]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:37 compute-0 python3.9[299917]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:38 compute-0 python3.9[300069]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:39 compute-0 python3.9[300221]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:40 compute-0 podman[300345]: 2025-12-05 01:28:40.091809502 +0000 UTC m=+0.128036932 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:28:40 compute-0 python3.9[300389]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:28:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:41 compute-0 python3.9[300543]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.546 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.547 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.560 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.561 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.562 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.563 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.564 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.564 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.565 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.567 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.568 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.569 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.570 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.571 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.566 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.573 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.576 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.576 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.577 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.577 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.575 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.578 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.580 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.583 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.584 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.585 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.586 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.587 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.589 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.590 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.591 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.592 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:28:42.593 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:28:42 compute-0 python3.9[300622]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtlogd.conf _original_basename=virtlogd.conf recurse=False state=file path=/etc/libvirt/virtlogd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:44 compute-0 python3.9[300774]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:45 compute-0 python3.9[300852]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtnodedevd.conf _original_basename=virtnodedevd.conf recurse=False state=file path=/etc/libvirt/virtnodedevd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:28:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:46 compute-0 python3.9[301004]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:46 compute-0 podman[301005]: 2025-12-05 01:28:46.701135658 +0000 UTC m=+0.110748009 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 01:28:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:47 compute-0 python3.9[301103]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtproxyd.conf _original_basename=virtproxyd.conf recurse=False state=file path=/etc/libvirt/virtproxyd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:48 compute-0 podman[301227]: 2025-12-05 01:28:48.177587738 +0000 UTC m=+0.112736245 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:28:48 compute-0 python3.9[301277]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:48 compute-0 python3.9[301355]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtqemud.conf _original_basename=virtqemud.conf recurse=False state=file path=/etc/libvirt/virtqemud.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:49 compute-0 python3.9[301508]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:50 compute-0 python3.9[301586]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/qemu.conf _original_basename=qemu.conf.j2 recurse=False state=file path=/etc/libvirt/qemu.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:51 compute-0 python3.9[301738]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:52 compute-0 python3.9[301816]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/libvirt/virtsecretd.conf _original_basename=virtsecretd.conf recurse=False state=file path=/etc/libvirt/virtsecretd.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:53 compute-0 python3.9[301968]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:54 compute-0 python3.9[302046]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0600 owner=libvirt dest=/etc/libvirt/auth.conf _original_basename=auth.conf recurse=False state=file path=/etc/libvirt/auth.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:55 compute-0 podman[302170]: 2025-12-05 01:28:55.675517029 +0000 UTC m=+0.104980038 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:28:55 compute-0 python3.9[302217]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.155 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.155 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:28:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:28:56.156 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:28:56 compute-0 python3.9[302296]: ansible-ansible.legacy.file Invoked with group=libvirt mode=0640 owner=libvirt dest=/etc/sasl2/libvirt.conf _original_basename=sasl_libvirt.conf recurse=False state=file path=/etc/sasl2/libvirt.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:28:58 compute-0 python3.9[302448]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  5 01:28:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:28:59 compute-0 python3.9[302601]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:28:59 compute-0 podman[158197]: time="2025-12-05T01:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:28:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7269 "" "Go-http-client/1.1"
Dec  5 01:29:00 compute-0 python3.9[302753]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: ERROR   01:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:29:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:29:01 compute-0 python3.9[302905]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:02 compute-0 python3.9[303057]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:03 compute-0 python3.9[303209]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:04 compute-0 podman[303334]: 2025-12-05 01:29:04.589560961 +0000 UTC m=+0.098786554 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:29:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:04 compute-0 podman[303335]: 2025-12-05 01:29:04.603112211 +0000 UTC m=+0.104701610 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  5 01:29:04 compute-0 podman[303336]: 2025-12-05 01:29:04.626791723 +0000 UTC m=+0.131178170 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  5 01:29:04 compute-0 podman[303333]: 2025-12-05 01:29:04.640305021 +0000 UTC m=+0.152352793 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  5 01:29:04 compute-0 python3.9[303439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:05 compute-0 python3.9[303595]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:06 compute-0 python3.9[303747]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:08 compute-0 python3.9[303899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:09 compute-0 python3.9[304051]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:10 compute-0 podman[304134]: 2025-12-05 01:29:10.746522494 +0000 UTC m=+0.148954036 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, config_id=edpm)
Dec  5 01:29:11 compute-0 python3.9[304222]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:12 compute-0 python3.9[304374]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:13 compute-0 python3.9[304526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:14 compute-0 python3.9[304678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:15 compute-0 python3.9[304830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:15 compute-0 python3.9[304908]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:29:16
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'cephfs.cephfs.data']
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:29:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:16 compute-0 podman[305109]: 2025-12-05 01:29:16.976451717 +0000 UTC m=+0.110165582 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=edpm, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter)
Dec  5 01:29:17 compute-0 python3.9[305110]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:17 compute-0 python3.9[305274]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3a4186c5-7e9b-45e5-94d9-c6cd27b6f618 does not exist
Dec  5 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 378c0e33-e4b2-4195-bab0-a78b4aaf5b8a does not exist
Dec  5 01:29:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8edfd303-9bc4-494f-9b33-329afd70a601 does not exist
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:29:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:29:18 compute-0 podman[305514]: 2025-12-05 01:29:18.416680263 +0000 UTC m=+0.081403708 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:29:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:18 compute-0 podman[305603]: 2025-12-05 01:29:18.627554411 +0000 UTC m=+0.050166754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:18 compute-0 podman[305603]: 2025-12-05 01:29:18.956563194 +0000 UTC m=+0.379175547 container create 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:29:19 compute-0 systemd[1]: Started libpod-conmon-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope.
Dec  5 01:29:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.139043698 +0000 UTC m=+0.561656061 container init 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.155373385 +0000 UTC m=+0.577985748 container start 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.162653359 +0000 UTC m=+0.585265732 container attach 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:19 compute-0 interesting_brahmagupta[305619]: 167 167
Dec  5 01:29:19 compute-0 systemd[1]: libpod-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope: Deactivated successfully.
Dec  5 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.166880967 +0000 UTC m=+0.589493320 container died 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5853d4ba12680505c24186270451c1de0f78e9b2bf4319bd3caee503eea61d4e-merged.mount: Deactivated successfully.
Dec  5 01:29:19 compute-0 podman[305603]: 2025-12-05 01:29:19.242193964 +0000 UTC m=+0.664806287 container remove 331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:29:19 compute-0 python3.9[305588]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:19 compute-0 systemd[1]: libpod-conmon-331726bc4d85fe1ff1361d85591f20d514466163b266ce38d02491fe181f8a05.scope: Deactivated successfully.
Dec  5 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.485587522 +0000 UTC m=+0.075578415 container create 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.447356323 +0000 UTC m=+0.037347306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:19 compute-0 systemd[1]: Started libpod-conmon-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope.
Dec  5 01:29:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.670578007 +0000 UTC m=+0.260568920 container init 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.687474389 +0000 UTC m=+0.277465272 container start 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:19 compute-0 podman[305645]: 2025-12-05 01:29:19.693119777 +0000 UTC m=+0.283110660 container attach 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  5 01:29:19 compute-0 python3.9[305743]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:20 compute-0 python3.9[305911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:20 compute-0 zealous_benz[305686]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:29:20 compute-0 zealous_benz[305686]: --> relative data size: 1.0
Dec  5 01:29:20 compute-0 zealous_benz[305686]: --> All data devices are unavailable
Dec  5 01:29:21 compute-0 systemd[1]: libpod-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Deactivated successfully.
Dec  5 01:29:21 compute-0 podman[305645]: 2025-12-05 01:29:21.035537017 +0000 UTC m=+1.625527900 container died 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:21 compute-0 systemd[1]: libpod-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Consumed 1.256s CPU time.
Dec  5 01:29:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-37460b3eac7fd82b1964115a51411a931f75530c74aa14f99e0c9e9cf831d556-merged.mount: Deactivated successfully.
Dec  5 01:29:21 compute-0 podman[305645]: 2025-12-05 01:29:21.122011616 +0000 UTC m=+1.712002519 container remove 72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_benz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  5 01:29:21 compute-0 systemd[1]: libpod-conmon-72f616471711f3d752ba11b9b6c14fd090779e5db97a53b95e971617373207a9.scope: Deactivated successfully.
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.114292811 +0000 UTC m=+0.050104112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.215465441 +0000 UTC m=+0.151276762 container create 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:29:22 compute-0 python3.9[306144]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:22 compute-0 systemd[1]: Started libpod-conmon-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope.
Dec  5 01:29:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.358806271 +0000 UTC m=+0.294617592 container init 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.373250545 +0000 UTC m=+0.309061826 container start 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.378246675 +0000 UTC m=+0.314057986 container attach 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:29:22 compute-0 brave_jones[306161]: 167 167
Dec  5 01:29:22 compute-0 systemd[1]: libpod-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope: Deactivated successfully.
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.383404159 +0000 UTC m=+0.319215450 container died 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:29:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e699b1f4e54cea1ec2aea7e8dea0e7eca4c7430bd9b3083d7d2b437c2ab5bde-merged.mount: Deactivated successfully.
Dec  5 01:29:22 compute-0 podman[306145]: 2025-12-05 01:29:22.438984794 +0000 UTC m=+0.374796085 container remove 93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:29:22 compute-0 systemd[1]: libpod-conmon-93e3415d8e524e753eba8786254d3dca00992de1642841fe3fd9c19c9e98765f.scope: Deactivated successfully.
Dec  5 01:29:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.683048891 +0000 UTC m=+0.083619390 container create 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.647800845 +0000 UTC m=+0.048371384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:22 compute-0 systemd[1]: Started libpod-conmon-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope.
Dec  5 01:29:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.863620872 +0000 UTC m=+0.264191351 container init 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.88502299 +0000 UTC m=+0.285593449 container start 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:29:22 compute-0 podman[306252]: 2025-12-05 01:29:22.890284828 +0000 UTC m=+0.290855287 container attach 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:29:23 compute-0 python3.9[306357]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]: {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    "0": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "devices": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "/dev/loop3"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            ],
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_name": "ceph_lv0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_size": "21470642176",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "name": "ceph_lv0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "tags": {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_name": "ceph",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.crush_device_class": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.encrypted": "0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_id": "0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.vdo": "0"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            },
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "vg_name": "ceph_vg0"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        }
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    ],
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    "1": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "devices": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "/dev/loop4"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            ],
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_name": "ceph_lv1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_size": "21470642176",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "name": "ceph_lv1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "tags": {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_name": "ceph",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.crush_device_class": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.encrypted": "0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_id": "1",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.vdo": "0"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            },
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "vg_name": "ceph_vg1"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        }
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    ],
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    "2": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "devices": [
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "/dev/loop5"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            ],
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_name": "ceph_lv2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_size": "21470642176",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "name": "ceph_lv2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "tags": {
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.cluster_name": "ceph",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.crush_device_class": "",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.encrypted": "0",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osd_id": "2",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:                "ceph.vdo": "0"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            },
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "type": "block",
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:            "vg_name": "ceph_vg2"
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:        }
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]:    ]
Dec  5 01:29:23 compute-0 hardcore_swartz[306300]: }
Dec  5 01:29:23 compute-0 systemd[1]: libpod-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope: Deactivated successfully.
Dec  5 01:29:23 compute-0 podman[306252]: 2025-12-05 01:29:23.729364579 +0000 UTC m=+1.129935068 container died 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:29:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-31bc212ceb3ce5191e5e5ce2936ce2afc136757f6c4e766ef86b4ba0bb1d6ff7-merged.mount: Deactivated successfully.
Dec  5 01:29:23 compute-0 podman[306252]: 2025-12-05 01:29:23.841290269 +0000 UTC m=+1.241860848 container remove 50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:29:23 compute-0 systemd[1]: libpod-conmon-50b826528a76947eebf56f36b4436239a269d09586c6c241687118e26dae194f.scope: Deactivated successfully.
Dec  5 01:29:24 compute-0 python3.9[306466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:24 compute-0 podman[306715]: 2025-12-05 01:29:24.907486923 +0000 UTC m=+0.066221563 container create 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:29:24 compute-0 systemd[1]: Started libpod-conmon-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope.
Dec  5 01:29:24 compute-0 podman[306715]: 2025-12-05 01:29:24.881417134 +0000 UTC m=+0.040151854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.040522083 +0000 UTC m=+0.199256713 container init 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.050102381 +0000 UTC m=+0.208837051 container start 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.056718286 +0000 UTC m=+0.215452946 container attach 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:29:25 compute-0 exciting_goldberg[306756]: 167 167
Dec  5 01:29:25 compute-0 systemd[1]: libpod-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope: Deactivated successfully.
Dec  5 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.06434332 +0000 UTC m=+0.223077990 container died 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:29:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-74eb120e3352d268cb64039b0b440cf5ffdc35c0e69618f505735dba3e250a7e-merged.mount: Deactivated successfully.
Dec  5 01:29:25 compute-0 podman[306715]: 2025-12-05 01:29:25.138791202 +0000 UTC m=+0.297525872 container remove 61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_goldberg, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:29:25 compute-0 systemd[1]: libpod-conmon-61cc64ec096690bcd2c5a4077e315daffbb1e529b236bb6630b30c11ab9ccf77.scope: Deactivated successfully.
Dec  5 01:29:25 compute-0 python3.9[306758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.398787175 +0000 UTC m=+0.063254581 container create 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.372827539 +0000 UTC m=+0.037294965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:29:25 compute-0 systemd[1]: Started libpod-conmon-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope.
Dec  5 01:29:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.558640766 +0000 UTC m=+0.223108192 container init 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.576837075 +0000 UTC m=+0.241304481 container start 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:29:25 compute-0 podman[306782]: 2025-12-05 01:29:25.582419171 +0000 UTC m=+0.246886587 container attach 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:29:25 compute-0 python3.9[306878]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:26 compute-0 charming_wilbur[306834]: {
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_id": 0,
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "type": "bluestore"
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    },
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_id": 1,
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "type": "bluestore"
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    },
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_id": 2,
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:        "type": "bluestore"
Dec  5 01:29:26 compute-0 charming_wilbur[306834]:    }
Dec  5 01:29:26 compute-0 charming_wilbur[306834]: }
Dec  5 01:29:26 compute-0 podman[307019]: 2025-12-05 01:29:26.71061786 +0000 UTC m=+0.122413086 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:29:26 compute-0 systemd[1]: libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Deactivated successfully.
Dec  5 01:29:26 compute-0 systemd[1]: libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Consumed 1.138s CPU time.
Dec  5 01:29:26 compute-0 conmon[306834]: conmon 984504081e0cdccde018 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope/container/memory.events
Dec  5 01:29:26 compute-0 podman[306782]: 2025-12-05 01:29:26.719174709 +0000 UTC m=+1.383642105 container died 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a72d8ad3e3954e0efe6eb8b1461d11916646ab0a651ab34e91345f319e404b0-merged.mount: Deactivated successfully.
Dec  5 01:29:26 compute-0 podman[306782]: 2025-12-05 01:29:26.808073446 +0000 UTC m=+1.472540872 container remove 984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:29:26 compute-0 systemd[1]: libpod-conmon-984504081e0cdccde018514780f5c2c4948cf1b7f788e0dbe5cf69eed2659141.scope: Deactivated successfully.
Dec  5 01:29:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:29:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:29:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 674b5970-f4a2-4c16-a196-f2021c9226c9 does not exist
Dec  5 01:29:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 536a2f30-69fe-4883-8789-629c0be5e6b8 does not exist
Dec  5 01:29:26 compute-0 python3.9[307076]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:27 compute-0 python3.9[307215]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:29:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:28 compute-0 python3.9[307367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:29 compute-0 python3.9[307445]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:29 compute-0 podman[158197]: time="2025-12-05T01:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:29:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7273 "" "Go-http-client/1.1"
Dec  5 01:29:30 compute-0 python3.9[307597]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:30 compute-0 python3.9[307675]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: ERROR   01:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:29:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:29:32 compute-0 python3.9[307827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:32 compute-0 python3.9[307905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:34 compute-0 python3.9[308057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:34 compute-0 podman[308109]: 2025-12-05 01:29:34.919267062 +0000 UTC m=+0.126533681 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:29:34 compute-0 podman[308108]: 2025-12-05 01:29:34.919409976 +0000 UTC m=+0.124531715 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:29:34 compute-0 podman[308107]: 2025-12-05 01:29:34.934743914 +0000 UTC m=+0.151380895 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  5 01:29:34 compute-0 podman[308115]: 2025-12-05 01:29:34.97320302 +0000 UTC m=+0.165325505 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  5 01:29:35 compute-0 python3.9[308211]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:36 compute-0 python3.9[308369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:36 compute-0 python3.9[308447]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:37 compute-0 python3.9[308599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:38 compute-0 python3.9[308677]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:39 compute-0 python3.9[308829]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:29:40 compute-0 python3.9[308907]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf _original_basename=libvirt-socket.unit.j2 recurse=False state=file path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:41 compute-0 python3.9[309057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:29:41 compute-0 podman[309061]: 2025-12-05 01:29:41.190447579 +0000 UTC m=+0.108224799 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  5 01:29:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:43 compute-0 python3.9[309230]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec  5 01:29:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:45 compute-0 python3.9[309382]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:29:46 compute-0 python3.9[309534]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:47 compute-0 podman[309658]: 2025-12-05 01:29:47.201967232 +0000 UTC m=+0.119644228 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Dec  5 01:29:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:47 compute-0 python3.9[309704]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:48 compute-0 python3.9[309856]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:48 compute-0 podman[309905]: 2025-12-05 01:29:48.645955523 +0000 UTC m=+0.068153147 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:29:49 compute-0 python3.9[310029]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:50 compute-0 python3.9[310182]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:51 compute-0 python3.9[310334]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:52 compute-0 python3.9[310486]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:53 compute-0 python3.9[310638]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:54 compute-0 python3.9[310790]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:55 compute-0 python3.9[310942]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.156 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:29:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:29:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:29:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:29:57 compute-0 podman[311066]: 2025-12-05 01:29:57.482535876 +0000 UTC m=+0.091943563 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:29:57 compute-0 python3.9[311110]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:29:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:29:58 compute-0 python3.9[311262]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:29:59 compute-0 python3.9[311416]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:29:59 compute-0 podman[158197]: time="2025-12-05T01:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:29:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7269 "" "Go-http-client/1.1"
Dec  5 01:30:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:00 compute-0 python3.9[311566]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: ERROR   01:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:30:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:30:01 compute-0 python3.9[311687]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898200.1960008-1017-134455890810254/.source.xml follow=False _original_basename=secret.xml.j2 checksum=fdb3975e1f666f2811f2fcfa5c297c7e31466e55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:02 compute-0 python3.9[311839]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine cbd280d3-cbd8-528b-ace6-2b3a887cdcee#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:30:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:02 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 01:30:02 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 01:30:03 compute-0 python3.9[312020]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:05 compute-0 podman[312251]: 2025-12-05 01:30:05.707000881 +0000 UTC m=+0.104390141 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:30:05 compute-0 podman[312253]: 2025-12-05 01:30:05.729070418 +0000 UTC m=+0.114236497 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:30:05 compute-0 podman[312250]: 2025-12-05 01:30:05.734381006 +0000 UTC m=+0.145771158 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:30:05 compute-0 podman[312258]: 2025-12-05 01:30:05.771708991 +0000 UTC m=+0.152730964 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:30:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:07 compute-0 python3.9[312565]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:09 compute-0 python3.9[312717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:10 compute-0 python3.9[312795]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/libvirt.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/libvirt.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:11 compute-0 podman[312919]: 2025-12-05 01:30:11.422646409 +0000 UTC m=+0.123015762 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  5 01:30:11 compute-0 python3.9[312965]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:12 compute-0 python3.9[313117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:13 compute-0 python3.9[313195]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:14 compute-0 python3.9[313347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:14 compute-0 python3.9[313425]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.dlok90ob recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:15 compute-0 python3.9[313577]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:30:16
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'images']
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:30:16 compute-0 python3.9[313655]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:17 compute-0 podman[313779]: 2025-12-05 01:30:17.696120029 +0000 UTC m=+0.113492795 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 01:30:18 compute-0 python3.9[313826]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:30:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:19 compute-0 podman[313951]: 2025-12-05 01:30:19.557523686 +0000 UTC m=+0.131862159 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:30:19 compute-0 python3[314003]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  5 01:30:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:21 compute-0 python3.9[314155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:22 compute-0 python3.9[314233]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:23 compute-0 python3.9[314385]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:23 compute-0 python3.9[314463]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:24 compute-0 python3.9[314615]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:25 compute-0 python3.9[314693]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:30:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:26 compute-0 python3.9[314845]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.301252) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227301306, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1795, "num_deletes": 250, "total_data_size": 3041035, "memory_usage": 3083944, "flush_reason": "Manual Compaction"}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227316795, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1720154, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11729, "largest_seqno": 13523, "table_properties": {"data_size": 1714302, "index_size": 2927, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14541, "raw_average_key_size": 20, "raw_value_size": 1701411, "raw_average_value_size": 2346, "num_data_blocks": 136, "num_entries": 725, "num_filter_entries": 725, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898023, "oldest_key_time": 1764898023, "file_creation_time": 1764898227, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15690 microseconds, and 8345 cpu microseconds.
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.316887) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1720154 bytes OK
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.316969) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319714) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319735) EVENT_LOG_v1 {"time_micros": 1764898227319728, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.319757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3033452, prev total WAL file size 3033452, number of live WAL files 2.
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.321469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1679KB)], [29(7640KB)]
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227321525, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9544233, "oldest_snapshot_seqno": -1}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 3995 keys, 7506489 bytes, temperature: kUnknown
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227363137, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7506489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7478069, "index_size": 17302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 94986, "raw_average_key_size": 23, "raw_value_size": 7404293, "raw_average_value_size": 1853, "num_data_blocks": 755, "num_entries": 3995, "num_filter_entries": 3995, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898227, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.363358) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7506489 bytes
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.365459) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 229.0 rd, 180.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.5 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(9.9) write-amplify(4.4) OK, records in: 4411, records dropped: 416 output_compression: NoCompression
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.365478) EVENT_LOG_v1 {"time_micros": 1764898227365469, "job": 12, "event": "compaction_finished", "compaction_time_micros": 41681, "compaction_time_cpu_micros": 18252, "output_level": 6, "num_output_files": 1, "total_output_size": 7506489, "num_input_records": 4411, "num_output_records": 3995, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227365815, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898227366953, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.321288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:30:27.367388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:30:27 compute-0 python3.9[314944]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:27 compute-0 podman[315033]: 2025-12-05 01:30:27.71261189 +0000 UTC m=+0.120582804 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cf29278d-c9e2-4f2a-8847-4e58fa987b5d does not exist
Dec  5 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69d778fd-78ce-465f-9011-9d82589199e4 does not exist
Dec  5 01:30:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cfd01f4e-f4c1-4e3e-848e-bd344e04b8ac does not exist
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:30:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:30:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:28 compute-0 python3.9[315246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.294024576 +0000 UTC m=+0.067985133 container create b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:30:29 compute-0 systemd[1]: Started libpod-conmon-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope.
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.269200732 +0000 UTC m=+0.043161339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.426773998 +0000 UTC m=+0.200734645 container init b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.444767062 +0000 UTC m=+0.218727649 container start b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.451381307 +0000 UTC m=+0.225341954 container attach b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:30:29 compute-0 systemd[1]: libpod-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope: Deactivated successfully.
Dec  5 01:30:29 compute-0 xenodochial_mayer[315380]: 167 167
Dec  5 01:30:29 compute-0 conmon[315380]: conmon b71e339e05db0027ddd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope/container/memory.events
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.459069872 +0000 UTC m=+0.233030469 container died b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-70137407807c0c71aa720abb61c99e7cad221592bbe4985fb92d1100314650ec-merged.mount: Deactivated successfully.
Dec  5 01:30:29 compute-0 podman[315364]: 2025-12-05 01:30:29.543684819 +0000 UTC m=+0.317645376 container remove b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:30:29 compute-0 systemd[1]: libpod-conmon-b71e339e05db0027ddd23bff32642e1b5573ede06c2ba5bcfaa5bff68a9001d3.scope: Deactivated successfully.
Dec  5 01:30:29 compute-0 podman[158197]: time="2025-12-05T01:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:30:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7279 "" "Go-http-client/1.1"
Dec  5 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.824124193 +0000 UTC m=+0.096540531 container create 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:30:29 compute-0 systemd[1]: Started libpod-conmon-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope.
Dec  5 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.79078613 +0000 UTC m=+0.063202508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:29 compute-0 podman[315449]: 2025-12-05 01:30:29.971226288 +0000 UTC m=+0.243642616 container init 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:30 compute-0 podman[315449]: 2025-12-05 01:30:30.007736369 +0000 UTC m=+0.280152697 container start 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:30:30 compute-0 podman[315449]: 2025-12-05 01:30:30.013621064 +0000 UTC m=+0.286037392 container attach 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:30:30 compute-0 python3.9[315491]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:31 compute-0 python3.9[315665]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:30:31 compute-0 distracted_mayer[315494]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:30:31 compute-0 distracted_mayer[315494]: --> relative data size: 1.0
Dec  5 01:30:31 compute-0 distracted_mayer[315494]: --> All data devices are unavailable
Dec  5 01:30:31 compute-0 systemd[1]: libpod-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Deactivated successfully.
Dec  5 01:30:31 compute-0 systemd[1]: libpod-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Consumed 1.146s CPU time.
Dec  5 01:30:31 compute-0 podman[315449]: 2025-12-05 01:30:31.213929239 +0000 UTC m=+1.486345577 container died 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:30:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ccfb3169c3b85b0e83fc17347e81c61a15e3ff34127068b8def5763688dc86-merged.mount: Deactivated successfully.
Dec  5 01:30:31 compute-0 podman[315449]: 2025-12-05 01:30:31.297563798 +0000 UTC m=+1.569980126 container remove 0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mayer, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:30:31 compute-0 systemd[1]: libpod-conmon-0666dc5e432a1582d7795c0dfe7059e8a52a0b018af4d7e2ba864bdc17c21953.scope: Deactivated successfully.
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: ERROR   01:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:30:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:30:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.331190791 +0000 UTC m=+0.081259234 container create 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.280414601 +0000 UTC m=+0.030483064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:32 compute-0 systemd[1]: Started libpod-conmon-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope.
Dec  5 01:30:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.582313016 +0000 UTC m=+0.332381479 container init 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.601382129 +0000 UTC m=+0.351450612 container start 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:30:32 compute-0 friendly_allen[315923]: 167 167
Dec  5 01:30:32 compute-0 systemd[1]: libpod-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope: Deactivated successfully.
Dec  5 01:30:32 compute-0 conmon[315923]: conmon 20e64b30bf05a60e280c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope/container/memory.events
Dec  5 01:30:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.70294052 +0000 UTC m=+0.453008983 container attach 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:30:32 compute-0 podman[315907]: 2025-12-05 01:30:32.70328941 +0000 UTC m=+0.453357853 container died 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-c621b018d240c442ac28991e7c88ce2b1332cf6c6c96ab82041bf70661333363-merged.mount: Deactivated successfully.
Dec  5 01:30:33 compute-0 podman[315907]: 2025-12-05 01:30:32.999218586 +0000 UTC m=+0.749287059 container remove 20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_allen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:30:33 compute-0 systemd[1]: libpod-conmon-20e64b30bf05a60e280cfe8f9617c265d6da39756a3a360b47e02194a9c1f3b1.scope: Deactivated successfully.
Dec  5 01:30:33 compute-0 python3.9[316013]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.280541006 +0000 UTC m=+0.081705777 container create 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.24424754 +0000 UTC m=+0.045412381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:33 compute-0 systemd[1]: Started libpod-conmon-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope.
Dec  5 01:30:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.519095629 +0000 UTC m=+0.320260410 container init 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.532436402 +0000 UTC m=+0.333601133 container start 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:30:33 compute-0 podman[316048]: 2025-12-05 01:30:33.58134985 +0000 UTC m=+0.382514611 container attach 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:30:34 compute-0 python3.9[316197]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:30:34 compute-0 busy_driscoll[316082]: {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    "0": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "devices": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "/dev/loop3"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            ],
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_name": "ceph_lv0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_size": "21470642176",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "name": "ceph_lv0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "tags": {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_name": "ceph",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.crush_device_class": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.encrypted": "0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_id": "0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.vdo": "0"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            },
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "vg_name": "ceph_vg0"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        }
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    ],
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    "1": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "devices": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "/dev/loop4"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            ],
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_name": "ceph_lv1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_size": "21470642176",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "name": "ceph_lv1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "tags": {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_name": "ceph",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.crush_device_class": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.encrypted": "0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_id": "1",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.vdo": "0"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            },
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "vg_name": "ceph_vg1"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        }
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    ],
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    "2": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "devices": [
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "/dev/loop5"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            ],
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_name": "ceph_lv2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_size": "21470642176",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "name": "ceph_lv2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "tags": {
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.cluster_name": "ceph",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.crush_device_class": "",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.encrypted": "0",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osd_id": "2",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:                "ceph.vdo": "0"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            },
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "type": "block",
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:            "vg_name": "ceph_vg2"
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:        }
Dec  5 01:30:34 compute-0 busy_driscoll[316082]:    ]
Dec  5 01:30:34 compute-0 busy_driscoll[316082]: }
Dec  5 01:30:34 compute-0 systemd[1]: libpod-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope: Deactivated successfully.
Dec  5 01:30:34 compute-0 podman[316048]: 2025-12-05 01:30:34.370655309 +0000 UTC m=+1.171820110 container died 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:30:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-71dc2bbab6841be7ad81fbb9e6936ec1a548608b6c71f7d90cee86109a444a48-merged.mount: Deactivated successfully.
Dec  5 01:30:34 compute-0 podman[316048]: 2025-12-05 01:30:34.49582905 +0000 UTC m=+1.296993781 container remove 9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:30:34 compute-0 systemd[1]: libpod-conmon-9ea72fde5076d95973adc43ce02667430872a17e2b6b1a9592f1981dc95c8bf6.scope: Deactivated successfully.
Dec  5 01:30:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:35 compute-0 python3.9[316441]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.456595875 +0000 UTC m=+0.077915311 container create 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.42102742 +0000 UTC m=+0.042346896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:35 compute-0 systemd[1]: Started libpod-conmon-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope.
Dec  5 01:30:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.590066338 +0000 UTC m=+0.211385824 container init 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.603135524 +0000 UTC m=+0.224454940 container start 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.609367398 +0000 UTC m=+0.230686894 container attach 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:30:35 compute-0 vibrant_hellman[316601]: 167 167
Dec  5 01:30:35 compute-0 systemd[1]: libpod-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope: Deactivated successfully.
Dec  5 01:30:35 compute-0 podman[316558]: 2025-12-05 01:30:35.613567725 +0000 UTC m=+0.234887131 container died 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:30:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e4f89b541516f3292e48723b1fbf8dc1b75ce3ee96835bfea7fdeb52c11c20e-merged.mount: Deactivated successfully.
Dec  5 01:30:36 compute-0 podman[316558]: 2025-12-05 01:30:36.03200866 +0000 UTC m=+0.653328046 container remove 9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:30:36 compute-0 python3.9[316724]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:36 compute-0 systemd[1]: libpod-conmon-9466f48e5797698865713c97d62e5ca3aeee810097a421e5206da3e41c9ba8a9.scope: Deactivated successfully.
Dec  5 01:30:36 compute-0 podman[316669]: 2025-12-05 01:30:36.121154204 +0000 UTC m=+0.346456422 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  5 01:30:36 compute-0 podman[316666]: 2025-12-05 01:30:36.128126629 +0000 UTC m=+0.357656526 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:30:36 compute-0 podman[316668]: 2025-12-05 01:30:36.130169106 +0000 UTC m=+0.359907338 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:30:36 compute-0 podman[316723]: 2025-12-05 01:30:36.188435916 +0000 UTC m=+0.295101346 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.276212941 +0000 UTC m=+0.059222617 container create df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:30:36 compute-0 systemd[1]: Started libpod-conmon-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope.
Dec  5 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.256922182 +0000 UTC m=+0.039931898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:30:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.41417812 +0000 UTC m=+0.197187886 container init df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.439150459 +0000 UTC m=+0.222160175 container start df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:30:36 compute-0 podman[316790]: 2025-12-05 01:30:36.445112436 +0000 UTC m=+0.228122122 container attach df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:30:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:37 compute-0 python3.9[316956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:37 compute-0 serene_albattani[316837]: {
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_id": 0,
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "type": "bluestore"
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    },
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_id": 1,
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "type": "bluestore"
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    },
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_id": 2,
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:30:37 compute-0 serene_albattani[316837]:        "type": "bluestore"
Dec  5 01:30:37 compute-0 serene_albattani[316837]:    }
Dec  5 01:30:37 compute-0 serene_albattani[316837]: }
Dec  5 01:30:37 compute-0 systemd[1]: libpod-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Deactivated successfully.
Dec  5 01:30:37 compute-0 systemd[1]: libpod-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Consumed 1.181s CPU time.
Dec  5 01:30:37 compute-0 podman[316790]: 2025-12-05 01:30:37.622867329 +0000 UTC m=+1.405877005 container died df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db58f119bedcde94768938759851b30780d955bcce2e20192042f3da70a82c6-merged.mount: Deactivated successfully.
Dec  5 01:30:37 compute-0 podman[316790]: 2025-12-05 01:30:37.710214802 +0000 UTC m=+1.493224488 container remove df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_albattani, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:30:37 compute-0 systemd[1]: libpod-conmon-df90a21290118ae7ff6be2be00fc9a3a61e53ce20dd7d875983a39a21b426fdd.scope: Deactivated successfully.
Dec  5 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:30:37 compute-0 python3.9[317060]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt.target _original_basename=edpm_libvirt.target recurse=False state=file path=/etc/systemd/system/edpm_libvirt.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:30:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1b6fc748-2bc6-4708-b635-3cbeae3bc37d does not exist
Dec  5 01:30:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 480e9877-b788-443d-8d38-f61e007658b5 does not exist
Dec  5 01:30:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:38 compute-0 python3.9[317277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:30:39 compute-0 python3.9[317355]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/edpm_libvirt_guests.service _original_basename=edpm_libvirt_guests.service recurse=False state=file path=/etc/systemd/system/edpm_libvirt_guests.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:41 compute-0 python3.9[317509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:30:41 compute-0 podman[317515]: 2025-12-05 01:30:41.748128611 +0000 UTC m=+0.149559005 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec  5 01:30:42 compute-0 python3.9[317607]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/systemd/system/virt-guest-shutdown.target _original_basename=virt-guest-shutdown.target recurse=False state=file path=/etc/systemd/system/virt-guest-shutdown.target force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:30:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.546 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.547 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.547 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.555 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.558 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes.rate': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.564 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:30:42.565 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:30:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:43 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Dec  5 01:30:43 compute-0 systemd[1]: session-55.scope: Consumed 2min 52.570s CPU time.
Dec  5 01:30:43 compute-0 systemd-logind[792]: Session 55 logged out. Waiting for processes to exit.
Dec  5 01:30:43 compute-0 systemd-logind[792]: Removed session 55.
Dec  5 01:30:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:30:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:48 compute-0 podman[317633]: 2025-12-05 01:30:48.711290693 +0000 UTC m=+0.110418650 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Dec  5 01:30:49 compute-0 systemd-logind[792]: New session 56 of user zuul.
Dec  5 01:30:49 compute-0 systemd[1]: Started Session 56 of User zuul.
Dec  5 01:30:50 compute-0 podman[317781]: 2025-12-05 01:30:50.505778929 +0000 UTC m=+0.101756118 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:30:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:50 compute-0 python3.9[317822]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:30:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:53 compute-0 python3.9[317985]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:30:53 compute-0 network[318002]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:30:53 compute-0 network[318003]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:30:53 compute-0 network[318004]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:30:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.157 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:30:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:30:56.158 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:30:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:30:58 compute-0 podman[318247]: 2025-12-05 01:30:58.507326117 +0000 UTC m=+0.103527647 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 01:30:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:30:58 compute-0 python3.9[318294]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:30:59 compute-0 podman[158197]: time="2025-12-05T01:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7282 "" "Go-http-client/1.1"
Dec  5 01:30:59 compute-0 python3.9[318378]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:31:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: ERROR   01:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:31:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:31:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:02 compute-0 python3.9[318531]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:31:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:04 compute-0 python3.9[318683]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:31:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:05 compute-0 python3.9[318836]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:31:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:06 compute-0 podman[318960]: 2025-12-05 01:31:06.715692289 +0000 UTC m=+0.103590178 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 01:31:06 compute-0 podman[318961]: 2025-12-05 01:31:06.721457661 +0000 UTC m=+0.105629566 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:31:06 compute-0 podman[318962]: 2025-12-05 01:31:06.729512566 +0000 UTC m=+0.107143248 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec  5 01:31:06 compute-0 podman[318963]: 2025-12-05 01:31:06.786017666 +0000 UTC m=+0.162140906 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 01:31:06 compute-0 python3.9[319063]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:31:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:07 compute-0 python3.9[319225]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:08 compute-0 python3.9[319348]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898267.1762218-95-200585234844741/.source.iscsi _original_basename=.6wij8x2y follow=False checksum=01b6663853d932e08cc55f332ece7cb3fc654e0a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:10 compute-0 python3.9[319500]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:11 compute-0 python3.9[319652]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:12 compute-0 podman[319776]: 2025-12-05 01:31:12.421607706 +0000 UTC m=+0.116294105 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, distribution-scope=public, release-0.7.12=)
Dec  5 01:31:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:12 compute-0 python3.9[319824]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:31:12 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  5 01:31:14 compute-0 python3.9[319980]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:31:14 compute-0 systemd[1]: Reloading.
Dec  5 01:31:14 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:31:14 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:31:14 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  5 01:31:14 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  5 01:31:14 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec  5 01:31:14 compute-0 systemd[1]: Started Open-iSCSI.
Dec  5 01:31:14 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec  5 01:31:14 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec  5 01:31:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:31:16
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:31:16 compute-0 python3.9[320181]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:31:16 compute-0 network[320198]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:31:16 compute-0 network[320199]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:31:16 compute-0 network[320200]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:31:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:18 compute-0 podman[320236]: 2025-12-05 01:31:18.89400788 +0000 UTC m=+0.123597500 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, container_name=openstack_network_exporter)
Dec  5 01:31:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:20 compute-0 podman[320307]: 2025-12-05 01:31:20.680315507 +0000 UTC m=+0.114443804 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:31:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:22 compute-0 python3.9[320517]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  5 01:31:24 compute-0 python3.9[320669]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  5 01:31:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:25 compute-0 python3.9[320825]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:25 compute-0 python3.9[320948]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898284.4031699-172-104956794601973/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:31:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:27 compute-0 python3.9[321100]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:28 compute-0 podman[321223]: 2025-12-05 01:31:28.699380671 +0000 UTC m=+0.108074687 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  5 01:31:29 compute-0 python3.9[321271]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:31:29 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  5 01:31:29 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  5 01:31:29 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  5 01:31:29 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  5 01:31:29 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec  5 01:31:29 compute-0 podman[158197]: time="2025-12-05T01:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7281 "" "Go-http-client/1.1"
Dec  5 01:31:30 compute-0 python3.9[321427]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:31:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: ERROR   01:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:31:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:31:32 compute-0 python3.9[321579]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:31:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:33 compute-0 python3.9[321731]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:31:34 compute-0 python3.9[321883]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:34 compute-0 python3.9[322006]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898293.319895-230-199805032067491/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:35 compute-0 python3.9[322158]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:31:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:36 compute-0 python3.9[322311]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:37 compute-0 podman[322411]: 2025-12-05 01:31:37.676622742 +0000 UTC m=+0.081137175 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:31:37 compute-0 podman[322416]: 2025-12-05 01:31:37.698078341 +0000 UTC m=+0.091883015 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:31:37 compute-0 podman[322425]: 2025-12-05 01:31:37.723281074 +0000 UTC m=+0.109535257 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:31:37 compute-0 podman[322414]: 2025-12-05 01:31:37.734331712 +0000 UTC m=+0.120124353 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:31:37 compute-0 python3.9[322547]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:31:38 compute-0 python3.9[322814]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 22df35af-1fff-4771-9b37-811be718a10b does not exist
Dec  5 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb682c7b-c0b1-48cc-8b74-599dc176ddb4 does not exist
Dec  5 01:31:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e62674e8-92ae-49b8-a2bc-16ae455fa6d7 does not exist
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:31:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:31:39 compute-0 python3.9[323110]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.829211732 +0000 UTC m=+0.113183070 container create d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.76535851 +0000 UTC m=+0.049329888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:39 compute-0 systemd[1]: Started libpod-conmon-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope.
Dec  5 01:31:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.975152044 +0000 UTC m=+0.259123392 container init d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.992398255 +0000 UTC m=+0.276369613 container start d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:31:39 compute-0 podman[323122]: 2025-12-05 01:31:39.999111893 +0000 UTC m=+0.283083281 container attach d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:40 compute-0 focused_matsumoto[323136]: 167 167
Dec  5 01:31:40 compute-0 systemd[1]: libpod-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope: Deactivated successfully.
Dec  5 01:31:40 compute-0 podman[323122]: 2025-12-05 01:31:40.005164452 +0000 UTC m=+0.289135760 container died d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:31:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bc365e79b3b9aaf465676cc4688fef67a4d3a88d7b030465c185b84873f94f5-merged.mount: Deactivated successfully.
Dec  5 01:31:40 compute-0 podman[323122]: 2025-12-05 01:31:40.079392963 +0000 UTC m=+0.363364281 container remove d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:40 compute-0 systemd[1]: libpod-conmon-d351e89151aae82f42d3de309ddb1ca735f41368d5ece24eb47d7fd61671369d.scope: Deactivated successfully.
Dec  5 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.354055168 +0000 UTC m=+0.091052852 container create 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.321216401 +0000 UTC m=+0.058214135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:40 compute-0 systemd[1]: Started libpod-conmon-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope.
Dec  5 01:31:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.501761739 +0000 UTC m=+0.238759413 container init 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.521346876 +0000 UTC m=+0.258344570 container start 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:31:40 compute-0 podman[323159]: 2025-12-05 01:31:40.527857198 +0000 UTC m=+0.264854942 container attach 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:41 compute-0 python3.9[323339]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:41 compute-0 trusting_ishizaka[323175]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:31:41 compute-0 trusting_ishizaka[323175]: --> relative data size: 1.0
Dec  5 01:31:41 compute-0 trusting_ishizaka[323175]: --> All data devices are unavailable
Dec  5 01:31:41 compute-0 systemd[1]: libpod-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Deactivated successfully.
Dec  5 01:31:41 compute-0 systemd[1]: libpod-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Consumed 1.046s CPU time.
Dec  5 01:31:41 compute-0 podman[323159]: 2025-12-05 01:31:41.652941683 +0000 UTC m=+1.389939357 container died 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:31:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-feb27211814bbbe337f49bd1cae8cb49b1845dc3dcb866f9e5924f067dfc42a3-merged.mount: Deactivated successfully.
Dec  5 01:31:41 compute-0 podman[323159]: 2025-12-05 01:31:41.736204266 +0000 UTC m=+1.473201920 container remove 18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:31:41 compute-0 systemd[1]: libpod-conmon-18be16e34ff2bec75ac1d8ce4e4c90904fdb72d33359a6ff893b499c337f9c74.scope: Deactivated successfully.
Dec  5 01:31:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:42 compute-0 podman[323591]: 2025-12-05 01:31:42.724202147 +0000 UTC m=+0.126086020 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, version=9.4, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.766442996 +0000 UTC m=+0.081914387 container create 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:31:42 compute-0 systemd[1]: Started libpod-conmon-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope.
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.736746577 +0000 UTC m=+0.052217998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.869552213 +0000 UTC m=+0.185023624 container init 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.89023878 +0000 UTC m=+0.205710171 container start 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.895225719 +0000 UTC m=+0.210697110 container attach 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:31:42 compute-0 infallible_feistel[323691]: 167 167
Dec  5 01:31:42 compute-0 systemd[1]: libpod-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope: Deactivated successfully.
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.899058976 +0000 UTC m=+0.214530367 container died 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:31:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd857a46606b17196855c1e6a9a97182b19a75a37a2844d2b4cad29b6dde7c8-merged.mount: Deactivated successfully.
Dec  5 01:31:42 compute-0 podman[323639]: 2025-12-05 01:31:42.94900394 +0000 UTC m=+0.264475331 container remove 41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_feistel, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:31:42 compute-0 systemd[1]: libpod-conmon-41f495cd3f9be573bda24b69aa8f8fe2f1a92aeda4c38b0d785670005dd12449.scope: Deactivated successfully.
Dec  5 01:31:43 compute-0 python3.9[323690]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.182825435 +0000 UTC m=+0.075284782 container create 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.153833716 +0000 UTC m=+0.046293133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:43 compute-0 systemd[1]: Started libpod-conmon-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope.
Dec  5 01:31:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.329454437 +0000 UTC m=+0.221913874 container init 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.363101586 +0000 UTC m=+0.255560943 container start 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:31:43 compute-0 podman[323722]: 2025-12-05 01:31:43.368857157 +0000 UTC m=+0.261316594 container attach 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:31:43 compute-0 python3.9[323887]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:44 compute-0 awesome_euclid[323778]: {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    "0": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "devices": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "/dev/loop3"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            ],
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_name": "ceph_lv0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_size": "21470642176",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "name": "ceph_lv0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "tags": {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_name": "ceph",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.crush_device_class": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.encrypted": "0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_id": "0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.vdo": "0"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            },
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "vg_name": "ceph_vg0"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        }
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    ],
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    "1": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "devices": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "/dev/loop4"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            ],
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_name": "ceph_lv1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_size": "21470642176",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "name": "ceph_lv1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "tags": {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_name": "ceph",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.crush_device_class": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.encrypted": "0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_id": "1",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.vdo": "0"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            },
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "vg_name": "ceph_vg1"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        }
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    ],
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    "2": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "devices": [
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "/dev/loop5"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            ],
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_name": "ceph_lv2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_size": "21470642176",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "name": "ceph_lv2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "tags": {
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.cluster_name": "ceph",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.crush_device_class": "",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.encrypted": "0",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osd_id": "2",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:                "ceph.vdo": "0"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            },
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "type": "block",
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:            "vg_name": "ceph_vg2"
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:        }
Dec  5 01:31:44 compute-0 awesome_euclid[323778]:    ]
Dec  5 01:31:44 compute-0 awesome_euclid[323778]: }
Dec  5 01:31:44 compute-0 systemd[1]: libpod-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope: Deactivated successfully.
Dec  5 01:31:44 compute-0 podman[323722]: 2025-12-05 01:31:44.286476103 +0000 UTC m=+1.178935470 container died 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a7985eca371c1c607d87fd2bdd39fef8463363b44fc5fdf09344cc1dc7de6de-merged.mount: Deactivated successfully.
Dec  5 01:31:44 compute-0 podman[323722]: 2025-12-05 01:31:44.385535777 +0000 UTC m=+1.277995134 container remove 43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_euclid, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:31:44 compute-0 systemd[1]: libpod-conmon-43b1524bc969b76319f26e33031f4982d5ee453e64b5ca6813fa9a567033e414.scope: Deactivated successfully.
Dec  5 01:31:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:44 compute-0 python3.9[324105]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.435923938 +0000 UTC m=+0.065378556 container create 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:31:45 compute-0 systemd[1]: Started libpod-conmon-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope.
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.409641814 +0000 UTC m=+0.039096452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.564863546 +0000 UTC m=+0.194318234 container init 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.582189539 +0000 UTC m=+0.211644137 container start 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:31:45 compute-0 serene_allen[324333]: 167 167
Dec  5 01:31:45 compute-0 systemd[1]: libpod-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope: Deactivated successfully.
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.597571539 +0000 UTC m=+0.227026237 container attach 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.598604457 +0000 UTC m=+0.228059105 container died 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  5 01:31:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1d192f9b1267d9c79f1de8257e095fbdf243866bf88f8ee0e454573dfa54ca0-merged.mount: Deactivated successfully.
Dec  5 01:31:45 compute-0 podman[324279]: 2025-12-05 01:31:45.66608216 +0000 UTC m=+0.295536768 container remove 1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:31:45 compute-0 systemd[1]: libpod-conmon-1150cf13eed1f54baa8867da4360e04b8a6d7f06b1604cfe0919fd837df33f32.scope: Deactivated successfully.
Dec  5 01:31:45 compute-0 python3.9[324373]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:45 compute-0 podman[324384]: 2025-12-05 01:31:45.929376748 +0000 UTC m=+0.074346566 container create e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:31:45 compute-0 podman[324384]: 2025-12-05 01:31:45.897754985 +0000 UTC m=+0.042724873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:31:45 compute-0 systemd[1]: Started libpod-conmon-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope.
Dec  5 01:31:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.076802812 +0000 UTC m=+0.221772710 container init e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.0878653 +0000 UTC m=+0.232835138 container start e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:31:46 compute-0 podman[324384]: 2025-12-05 01:31:46.095768291 +0000 UTC m=+0.240738109 container attach e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:31:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:46 compute-0 python3.9[324555]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]: {
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_id": 0,
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "type": "bluestore"
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    },
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_id": 1,
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "type": "bluestore"
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    },
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_id": 2,
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:        "type": "bluestore"
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]:    }
Dec  5 01:31:47 compute-0 friendly_varahamihira[324423]: }
Dec  5 01:31:47 compute-0 systemd[1]: libpod-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Deactivated successfully.
Dec  5 01:31:47 compute-0 systemd[1]: libpod-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Consumed 1.097s CPU time.
Dec  5 01:31:47 compute-0 podman[324384]: 2025-12-05 01:31:47.187078884 +0000 UTC m=+1.332048692 container died e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:31:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-aad5b6b2523faafe2fa06a6a48957f285fc98e9f3691f8b8c343a2f1c6129f6d-merged.mount: Deactivated successfully.
Dec  5 01:31:47 compute-0 podman[324384]: 2025-12-05 01:31:47.279185215 +0000 UTC m=+1.424155023 container remove e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_varahamihira, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:31:47 compute-0 systemd[1]: libpod-conmon-e2ed9d85c5581dfe4ab10af10724c19795eb13aa4921f1eca44e700ab71a2e73.scope: Deactivated successfully.
Dec  5 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:31:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:31:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 05456b62-3519-4b28-86ee-d2a718ba5e1a does not exist
Dec  5 01:31:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6c08dc9f-db9a-4ee6-9a0d-6a83bee8dca8 does not exist
Dec  5 01:31:47 compute-0 python3.9[324797]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:31:48 compute-0 python3.9[324876]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:31:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:49 compute-0 podman[325000]: 2025-12-05 01:31:49.189236375 +0000 UTC m=+0.133371093 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9)
Dec  5 01:31:49 compute-0 python3.9[325048]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:50 compute-0 python3.9[325127]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:31:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:50 compute-0 podman[325251]: 2025-12-05 01:31:50.969527534 +0000 UTC m=+0.115780581 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:31:51 compute-0 python3.9[325296]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:52 compute-0 python3.9[325453]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:52 compute-0 python3.9[325531]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:53 compute-0 python3.9[325683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:54 compute-0 python3.9[325761]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:55 compute-0 python3.9[325913]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:31:55 compute-0 systemd[1]: Reloading.
Dec  5 01:31:55 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:31:55 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.158 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.159 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:31:56.159 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:31:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:57 compute-0 python3.9[326103]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.337957) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317337989, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1213, "num_deletes": 507, "total_data_size": 1379934, "memory_usage": 1414848, "flush_reason": "Manual Compaction"}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317349113, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1356189, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13524, "largest_seqno": 14736, "table_properties": {"data_size": 1350782, "index_size": 2355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 13913, "raw_average_key_size": 17, "raw_value_size": 1338030, "raw_average_value_size": 1719, "num_data_blocks": 108, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898228, "oldest_key_time": 1764898228, "file_creation_time": 1764898317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 11230 microseconds, and 5382 cpu microseconds.
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.349172) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1356189 bytes OK
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.349206) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351454) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351469) EVENT_LOG_v1 {"time_micros": 1764898317351465, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.351484) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1373312, prev total WAL file size 1373312, number of live WAL files 2.
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.352377) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1324KB)], [32(7330KB)]
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317352424, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 8862678, "oldest_snapshot_seqno": -1}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3746 keys, 6957720 bytes, temperature: kUnknown
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317400993, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 6957720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6931066, "index_size": 16177, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91922, "raw_average_key_size": 24, "raw_value_size": 6861561, "raw_average_value_size": 1831, "num_data_blocks": 686, "num_entries": 3746, "num_filter_entries": 3746, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.401322) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 6957720 bytes
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.403167) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 142.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.2 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.7) write-amplify(5.1) OK, records in: 4773, records dropped: 1027 output_compression: NoCompression
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.403189) EVENT_LOG_v1 {"time_micros": 1764898317403177, "job": 14, "event": "compaction_finished", "compaction_time_micros": 48673, "compaction_time_cpu_micros": 20817, "output_level": 6, "num_output_files": 1, "total_output_size": 6957720, "num_input_records": 4773, "num_output_records": 3746, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317403699, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898317405941, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.352241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:31:57.406131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:31:57 compute-0 python3.9[326181]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:31:58 compute-0 python3.9[326333]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:31:59 compute-0 podman[326383]: 2025-12-05 01:31:59.375606757 +0000 UTC m=+0.110469643 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:31:59 compute-0 python3.9[326429]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:31:59 compute-0 podman[158197]: time="2025-12-05T01:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 35733 "" "Go-http-client/1.1"
Dec  5 01:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7284 "" "Go-http-client/1.1"
Dec  5 01:32:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:00 compute-0 python3.9[326581]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:32:00 compute-0 systemd[1]: Reloading.
Dec  5 01:32:00 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:00 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:01 compute-0 systemd[1]: Starting Create netns directory...
Dec  5 01:32:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  5 01:32:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  5 01:32:01 compute-0 systemd[1]: Finished Create netns directory.
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: ERROR   01:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:32:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:32:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:02 compute-0 python3.9[326775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:32:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:03 compute-0 python3.9[326927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:32:03 compute-0 python3.9[327050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898322.6056979-437-111670610304198/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:32:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:05 compute-0 python3.9[327202]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:32:06 compute-0 python3.9[327354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:32:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:06 compute-0 python3.9[327477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898325.3932378-462-219182495185765/.source.json _original_basename=.vz_b2g86 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:06 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 01:32:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:07 compute-0 python3.9[327630]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:08 compute-0 podman[327680]: 2025-12-05 01:32:08.683377283 +0000 UTC m=+0.089556180 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  5 01:32:08 compute-0 podman[327679]: 2025-12-05 01:32:08.68971212 +0000 UTC m=+0.098118769 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:32:08 compute-0 podman[327678]: 2025-12-05 01:32:08.699772551 +0000 UTC m=+0.107300676 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:32:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:08 compute-0 podman[327681]: 2025-12-05 01:32:08.734042187 +0000 UTC m=+0.135276106 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:32:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:12 compute-0 python3.9[328137]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  5 01:32:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:13 compute-0 podman[328261]: 2025-12-05 01:32:13.409301729 +0000 UTC m=+0.139876344 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container)
Dec  5 01:32:13 compute-0 python3.9[328308]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:32:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:14 compute-0 python3.9[328460]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:32:16
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr']
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:32:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:16 compute-0 python3[328638]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:32:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:18 compute-0 podman[328650]: 2025-12-05 01:32:18.586182823 +0000 UTC m=+1.486691128 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  5 01:32:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:18 compute-0 podman[328704]: 2025-12-05 01:32:18.868312546 +0000 UTC m=+0.101541315 container create 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:32:18 compute-0 podman[328704]: 2025-12-05 01:32:18.823114705 +0000 UTC m=+0.056343524 image pull 9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  5 01:32:18 compute-0 python3[328638]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  5 01:32:20 compute-0 podman[328768]: 2025-12-05 01:32:20.039860689 +0000 UTC m=+0.102816510 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.buildah.version=1.33.7)
Dec  5 01:32:20 compute-0 python3.9[328913]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:32:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:21 compute-0 podman[329039]: 2025-12-05 01:32:21.522631145 +0000 UTC m=+0.098630763 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:32:21 compute-0 python3.9[329088]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:22 compute-0 python3.9[329164]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:32:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:23 compute-0 python3.9[329315]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898342.4308462-550-247819364566459/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:24 compute-0 python3.9[329391]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:32:24 compute-0 systemd[1]: Reloading.
Dec  5 01:32:24 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:24 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:25 compute-0 python3.9[329502]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:32:25 compute-0 systemd[1]: Reloading.
Dec  5 01:32:25 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:25 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:25 compute-0 systemd[1]: Starting multipathd container...
Dec  5 01:32:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:26 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec  5 01:32:26 compute-0 podman[329543]: 2025-12-05 01:32:26.024662926 +0000 UTC m=+0.180378845 container init 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd)
Dec  5 01:32:26 compute-0 multipathd[329558]: + sudo -E kolla_set_configs
Dec  5 01:32:26 compute-0 podman[329543]: 2025-12-05 01:32:26.070770942 +0000 UTC m=+0.226486881 container start 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 01:32:26 compute-0 podman[329543]: multipathd
Dec  5 01:32:26 compute-0 systemd[1]: Started multipathd container.
Dec  5 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Validating config file
Dec  5 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:32:26 compute-0 multipathd[329558]: INFO:__main__:Writing out command to execute
Dec  5 01:32:26 compute-0 multipathd[329558]: ++ cat /run_command
Dec  5 01:32:26 compute-0 multipathd[329558]: + CMD='/usr/sbin/multipathd -d'
Dec  5 01:32:26 compute-0 multipathd[329558]: + ARGS=
Dec  5 01:32:26 compute-0 multipathd[329558]: + sudo kolla_copy_cacerts
Dec  5 01:32:26 compute-0 multipathd[329558]: + [[ ! -n '' ]]
Dec  5 01:32:26 compute-0 multipathd[329558]: + . kolla_extend_start
Dec  5 01:32:26 compute-0 multipathd[329558]: Running command: '/usr/sbin/multipathd -d'
Dec  5 01:32:26 compute-0 multipathd[329558]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  5 01:32:26 compute-0 multipathd[329558]: + umask 0022
Dec  5 01:32:26 compute-0 multipathd[329558]: + exec /usr/sbin/multipathd -d
Dec  5 01:32:26 compute-0 multipathd[329558]: 4433.331495 | --------start up--------
Dec  5 01:32:26 compute-0 multipathd[329558]: 4433.331534 | read /etc/multipath.conf
Dec  5 01:32:26 compute-0 multipathd[329558]: 4433.343690 | path checkers start up
Dec  5 01:32:26 compute-0 podman[329565]: 2025-12-05 01:32:26.231872028 +0000 UTC m=+0.141093708 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:32:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3306 writes, 14K keys, 3306 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3306 writes, 3306 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1280 writes, 5807 keys, 1280 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s#012Interval WAL: 1280 writes, 1280 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    127.7      0.12              0.06         7    0.017       0      0       0.0       0.0#012  L6      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    149.0    122.8      0.33              0.16         6    0.055     24K   3205       0.0       0.0#012 Sum      1/0    6.64 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.7    109.3    124.1      0.45              0.21        13    0.034     24K   3205       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    135.5    136.2      0.25              0.12         8    0.031     17K   2471       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    149.0    122.8      0.33              0.16         6    0.055     24K   3205       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.8      0.11              0.06         6    0.019       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.4 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 1.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000117 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(99,1.32 MB,0.429292%) FilterBlock(14,74.42 KB,0.0235966%) IndexBlock(14,145.05 KB,0.0459894%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 01:32:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:27 compute-0 python3.9[329744]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:32:28 compute-0 python3.9[329898]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:32:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:29 compute-0 podman[329956]: 2025-12-05 01:32:29.729240043 +0000 UTC m=+0.131630464 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 01:32:29 compute-0 podman[158197]: time="2025-12-05T01:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38323 "" "Go-http-client/1.1"
Dec  5 01:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7701 "" "Go-http-client/1.1"
Dec  5 01:32:30 compute-0 python3.9[330080]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:32:30 compute-0 systemd[1]: Stopping multipathd container...
Dec  5 01:32:30 compute-0 multipathd[329558]: 4437.751355 | exit (signal)
Dec  5 01:32:30 compute-0 multipathd[329558]: 4437.751616 | --------shut down-------
Dec  5 01:32:30 compute-0 systemd[1]: libpod-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec  5 01:32:30 compute-0 conmon[329558]: conmon 4b650b296b7a2b28da70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope/container/memory.events
Dec  5 01:32:30 compute-0 podman[330084]: 2025-12-05 01:32:30.644537565 +0000 UTC m=+0.110113564 container died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:32:30 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-70675f2a8c31aaaf.timer: Deactivated successfully.
Dec  5 01:32:30 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec  5 01:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-userdata-shm.mount: Deactivated successfully.
Dec  5 01:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260-merged.mount: Deactivated successfully.
Dec  5 01:32:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:30 compute-0 podman[330084]: 2025-12-05 01:32:30.727073238 +0000 UTC m=+0.192649247 container cleanup 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  5 01:32:30 compute-0 podman[330084]: multipathd
Dec  5 01:32:30 compute-0 podman[330109]: multipathd
Dec  5 01:32:30 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  5 01:32:30 compute-0 systemd[1]: Stopped multipathd container.
Dec  5 01:32:30 compute-0 systemd[1]: Starting multipathd container...
Dec  5 01:32:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dedb3334143486a48819d334ba71eb496fd83633e8380e82d90af06bbe44260/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.
Dec  5 01:32:31 compute-0 podman[330122]: 2025-12-05 01:32:31.092178735 +0000 UTC m=+0.212176900 container init 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:32:31 compute-0 multipathd[330136]: + sudo -E kolla_set_configs
Dec  5 01:32:31 compute-0 podman[330122]: 2025-12-05 01:32:31.131096361 +0000 UTC m=+0.251094436 container start 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:32:31 compute-0 podman[330122]: multipathd
Dec  5 01:32:31 compute-0 systemd[1]: Started multipathd container.
Dec  5 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Validating config file
Dec  5 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:32:31 compute-0 multipathd[330136]: INFO:__main__:Writing out command to execute
Dec  5 01:32:31 compute-0 multipathd[330136]: ++ cat /run_command
Dec  5 01:32:31 compute-0 multipathd[330136]: + CMD='/usr/sbin/multipathd -d'
Dec  5 01:32:31 compute-0 multipathd[330136]: + ARGS=
Dec  5 01:32:31 compute-0 multipathd[330136]: + sudo kolla_copy_cacerts
Dec  5 01:32:31 compute-0 multipathd[330136]: + [[ ! -n '' ]]
Dec  5 01:32:31 compute-0 multipathd[330136]: + . kolla_extend_start
Dec  5 01:32:31 compute-0 multipathd[330136]: Running command: '/usr/sbin/multipathd -d'
Dec  5 01:32:31 compute-0 multipathd[330136]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  5 01:32:31 compute-0 multipathd[330136]: + umask 0022
Dec  5 01:32:31 compute-0 multipathd[330136]: + exec /usr/sbin/multipathd -d
Dec  5 01:32:31 compute-0 podman[330143]: 2025-12-05 01:32:31.268071354 +0000 UTC m=+0.111467582 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:32:31 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-684d6b443d7fa374.service: Main process exited, code=exited, status=1/FAILURE
Dec  5 01:32:31 compute-0 systemd[1]: 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee-684d6b443d7fa374.service: Failed with result 'exit-code'.
Dec  5 01:32:31 compute-0 multipathd[330136]: 4438.428700 | --------start up--------
Dec  5 01:32:31 compute-0 multipathd[330136]: 4438.428745 | read /etc/multipath.conf
Dec  5 01:32:31 compute-0 multipathd[330136]: 4438.438180 | path checkers start up
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: ERROR   01:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:32:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:32:32 compute-0 python3.9[330328]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:33 compute-0 python3.9[330480]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  5 01:32:34 compute-0 python3.9[330632]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  5 01:32:34 compute-0 kernel: Key type psk registered
Dec  5 01:32:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:35 compute-0 python3.9[330795]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:32:36 compute-0 python3.9[330918]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764898354.64073-630-179403272687463/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:37 compute-0 python3.9[331070]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:39 compute-0 podman[331195]: 2025-12-05 01:32:39.227995097 +0000 UTC m=+0.110532775 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:32:39 compute-0 podman[331196]: 2025-12-05 01:32:39.24243586 +0000 UTC m=+0.115427122 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  5 01:32:39 compute-0 podman[331194]: 2025-12-05 01:32:39.258526379 +0000 UTC m=+0.136545681 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:32:39 compute-0 podman[331197]: 2025-12-05 01:32:39.277732945 +0000 UTC m=+0.153939997 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:32:39 compute-0 python3.9[331299]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:32:39 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  5 01:32:39 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec  5 01:32:39 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec  5 01:32:39 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec  5 01:32:39 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec  5 01:32:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:40 compute-0 python3.9[331459]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:32:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.547 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.548 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.551 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.554 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.556 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{'disk.device.allocation': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'cpu': [], 'memory.usage': [], 'network.incoming.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'network.incoming.packets': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.559 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:32:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:32:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:43 compute-0 systemd[1]: Reloading.
Dec  5 01:32:43 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:43 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:43 compute-0 systemd[1]: Reloading.
Dec  5 01:32:43 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:43 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:43 compute-0 podman[331502]: 2025-12-05 01:32:43.915287427 +0000 UTC m=+0.154491312 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9)
Dec  5 01:32:44 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  5 01:32:44 compute-0 systemd-logind[792]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  5 01:32:44 compute-0 lvm[331595]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 01:32:44 compute-0 lvm[331595]: VG ceph_vg0 finished
Dec  5 01:32:44 compute-0 lvm[331596]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 01:32:44 compute-0 lvm[331596]: VG ceph_vg2 finished
Dec  5 01:32:44 compute-0 lvm[331597]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 01:32:44 compute-0 lvm[331597]: VG ceph_vg1 finished
Dec  5 01:32:44 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  5 01:32:44 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec  5 01:32:44 compute-0 systemd[1]: Reloading.
Dec  5 01:32:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:44 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:44 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:32:46 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  5 01:32:46 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec  5 01:32:46 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.112s CPU time.
Dec  5 01:32:46 compute-0 systemd[1]: run-re83b8ce83f334bbe83c861ce623b1d48.service: Deactivated successfully.
Dec  5 01:32:46 compute-0 python3.9[332937]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:32:46 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec  5 01:32:46 compute-0 iscsid[320020]: iscsid shutting down.
Dec  5 01:32:46 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec  5 01:32:46 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec  5 01:32:46 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  5 01:32:46 compute-0 systemd[1]: Starting Open-iSCSI...
Dec  5 01:32:46 compute-0 systemd[1]: Started Open-iSCSI.
Dec  5 01:32:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:47 compute-0 python3.9[333092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 23fa31b3-2c1f-4352-9e82-6a35106beb16 does not exist
Dec  5 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7ff44f00-bec6-4e3a-b7a8-fedae456313f does not exist
Dec  5 01:32:48 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d585d513-c540-4802-85dc-7130e56b05ca does not exist
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:32:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:32:49 compute-0 python3.9[333425]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.622612591 +0000 UTC m=+0.054074880 container create 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:32:49 compute-0 systemd[1]: Started libpod-conmon-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope.
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.599865816 +0000 UTC m=+0.031328125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.74441259 +0000 UTC m=+0.175874899 container init 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.756456646 +0000 UTC m=+0.187918935 container start 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.760521689 +0000 UTC m=+0.191984028 container attach 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:32:49 compute-0 friendly_brown[333607]: 167 167
Dec  5 01:32:49 compute-0 systemd[1]: libpod-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope: Deactivated successfully.
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.766443275 +0000 UTC m=+0.197905554 container died 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:32:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d9b757b32deab524aed848b607f7b06f4cbba903c45227e10caa1022a7afe6c-merged.mount: Deactivated successfully.
Dec  5 01:32:49 compute-0 podman[333546]: 2025-12-05 01:32:49.832719304 +0000 UTC m=+0.264181593 container remove 5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:32:49 compute-0 systemd[1]: libpod-conmon-5c1d300a1342399b4c217c92daf6a5fd69276dc62db50df48c8c7ad1d6664fbd.scope: Deactivated successfully.
Dec  5 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.097707669 +0000 UTC m=+0.106713899 container create 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.061013255 +0000 UTC m=+0.070019565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:50 compute-0 systemd[1]: Started libpod-conmon-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope.
Dec  5 01:32:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.26867803 +0000 UTC m=+0.277684300 container init 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.290783187 +0000 UTC m=+0.299789427 container start 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:32:50 compute-0 podman[333658]: 2025-12-05 01:32:50.296600259 +0000 UTC m=+0.305606499 container attach 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:32:50 compute-0 podman[333670]: 2025-12-05 01:32:50.308063409 +0000 UTC m=+0.149601076 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec  5 01:32:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:50 compute-0 python3.9[333751]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:32:51 compute-0 systemd[1]: Reloading.
Dec  5 01:32:51 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:32:51 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:32:51 compute-0 silly_chaum[333684]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:32:51 compute-0 silly_chaum[333684]: --> relative data size: 1.0
Dec  5 01:32:51 compute-0 silly_chaum[333684]: --> All data devices are unavailable
Dec  5 01:32:51 compute-0 podman[333806]: 2025-12-05 01:32:51.693197182 +0000 UTC m=+0.101575176 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:32:51 compute-0 systemd[1]: libpod-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Deactivated successfully.
Dec  5 01:32:51 compute-0 systemd[1]: libpod-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Consumed 1.287s CPU time.
Dec  5 01:32:51 compute-0 podman[333658]: 2025-12-05 01:32:51.714566298 +0000 UTC m=+1.723572558 container died 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:32:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9544aa2c1cd04dedeff1dfe56d41c044f59de67ee8511552158524c9e30df1c5-merged.mount: Deactivated successfully.
Dec  5 01:32:51 compute-0 podman[333658]: 2025-12-05 01:32:51.813291813 +0000 UTC m=+1.822298033 container remove 5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:32:51 compute-0 systemd[1]: libpod-conmon-5c79dadbbd1ff3db488fceab249330d1f3d31b8a6fa01e187d2eafdde4fe58e1.scope: Deactivated successfully.
Dec  5 01:32:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:52 compute-0 podman[334133]: 2025-12-05 01:32:52.861335979 +0000 UTC m=+0.074913022 container create 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:32:52 compute-0 podman[334133]: 2025-12-05 01:32:52.842220025 +0000 UTC m=+0.055797068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:52 compute-0 systemd[1]: Started libpod-conmon-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope.
Dec  5 01:32:52 compute-0 python3.9[334135]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:32:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.01904439 +0000 UTC m=+0.232621523 container init 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.034552742 +0000 UTC m=+0.248129825 container start 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.041026523 +0000 UTC m=+0.254603606 container attach 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:53 compute-0 intelligent_gagarin[334150]: 167 167
Dec  5 01:32:53 compute-0 systemd[1]: libpod-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope: Deactivated successfully.
Dec  5 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.045463727 +0000 UTC m=+0.259040830 container died 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ba86f5c6760b8e0b025ada95d926efc7f09d17ed3300615802288cebe94133-merged.mount: Deactivated successfully.
Dec  5 01:32:53 compute-0 network[334181]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:32:53 compute-0 podman[334133]: 2025-12-05 01:32:53.112026874 +0000 UTC m=+0.325603927 container remove 63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:32:53 compute-0 network[334183]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:32:53 compute-0 network[334184]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:32:53 compute-0 systemd[1]: libpod-conmon-63478234e6795d340475e4cbc52eed9cf0459cc5384b1a8cb8827d7cc16ab626.scope: Deactivated successfully.
Dec  5 01:32:53 compute-0 podman[334197]: 2025-12-05 01:32:53.379005354 +0000 UTC m=+0.088257203 container create aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:32:53 compute-0 podman[334197]: 2025-12-05 01:32:53.356446075 +0000 UTC m=+0.065697954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:54 compute-0 systemd[1]: Started libpod-conmon-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope.
Dec  5 01:32:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.302573047 +0000 UTC m=+1.011824956 container init aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.317531894 +0000 UTC m=+1.026783743 container start aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:32:54 compute-0 podman[334197]: 2025-12-05 01:32:54.323174282 +0000 UTC m=+1.032426141 container attach aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]: {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    "0": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "devices": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "/dev/loop3"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            ],
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_name": "ceph_lv0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_size": "21470642176",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "name": "ceph_lv0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "tags": {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_name": "ceph",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.crush_device_class": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.encrypted": "0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_id": "0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.vdo": "0"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            },
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "vg_name": "ceph_vg0"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        }
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    ],
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    "1": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "devices": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "/dev/loop4"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            ],
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_name": "ceph_lv1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_size": "21470642176",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "name": "ceph_lv1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "tags": {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_name": "ceph",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.crush_device_class": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.encrypted": "0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_id": "1",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.vdo": "0"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            },
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "vg_name": "ceph_vg1"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        }
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    ],
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    "2": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "devices": [
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "/dev/loop5"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            ],
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_name": "ceph_lv2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_size": "21470642176",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "name": "ceph_lv2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "tags": {
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.cluster_name": "ceph",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.crush_device_class": "",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.encrypted": "0",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osd_id": "2",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:                "ceph.vdo": "0"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            },
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "type": "block",
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:            "vg_name": "ceph_vg2"
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:        }
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]:    ]
Dec  5 01:32:55 compute-0 festive_mccarthy[334215]: }
Dec  5 01:32:55 compute-0 systemd[1]: libpod-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope: Deactivated successfully.
Dec  5 01:32:55 compute-0 podman[334197]: 2025-12-05 01:32:55.132391224 +0000 UTC m=+1.841643083 container died aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:32:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5b1c5f09093968b0f21d7a7b78d9142f0ee5f9045bef6a0863b46e998a0935f-merged.mount: Deactivated successfully.
Dec  5 01:32:55 compute-0 podman[334197]: 2025-12-05 01:32:55.21718515 +0000 UTC m=+1.926437009 container remove aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:32:55 compute-0 systemd[1]: libpod-conmon-aa682e3671d914a044d00c7d2dbec53abc7f6d74c0ec7e41eae7e050b8f0e213.scope: Deactivated successfully.
Dec  5 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.160 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:32:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.256875782 +0000 UTC m=+0.074526851 container create dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:32:56 compute-0 systemd[1]: Started libpod-conmon-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope.
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.228537581 +0000 UTC m=+0.046188690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.396401195 +0000 UTC m=+0.214052304 container init dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.407925027 +0000 UTC m=+0.225576086 container start dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.41554332 +0000 UTC m=+0.233194409 container attach dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:32:56 compute-0 zen_moser[334457]: 167 167
Dec  5 01:32:56 compute-0 systemd[1]: libpod-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope: Deactivated successfully.
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.421239099 +0000 UTC m=+0.238890188 container died dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:32:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e19c2a5b7a1688e9d31e1380026204a75f36484a494c6c434733d934c524ec0-merged.mount: Deactivated successfully.
Dec  5 01:32:56 compute-0 podman[334438]: 2025-12-05 01:32:56.488591968 +0000 UTC m=+0.306243037 container remove dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_moser, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:32:56 compute-0 systemd[1]: libpod-conmon-dec08016215a2a56cebf9f1f3a34293aad2945c0b94e97bcda35187bb8476254.scope: Deactivated successfully.
Dec  5 01:32:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.762730268 +0000 UTC m=+0.082510284 container create 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.724709707 +0000 UTC m=+0.044489823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:32:56 compute-0 systemd[1]: Started libpod-conmon-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope.
Dec  5 01:32:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.886573574 +0000 UTC m=+0.206353640 container init 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.912813906 +0000 UTC m=+0.232593932 container start 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:32:56 compute-0 podman[334492]: 2025-12-05 01:32:56.919280617 +0000 UTC m=+0.239060663 container attach 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:32:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]: {
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_id": 0,
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "type": "bluestore"
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    },
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_id": 1,
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "type": "bluestore"
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    },
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_id": 2,
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:        "type": "bluestore"
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]:    }
Dec  5 01:32:57 compute-0 quizzical_swanson[334513]: }
Dec  5 01:32:58 compute-0 systemd[1]: libpod-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Deactivated successfully.
Dec  5 01:32:58 compute-0 systemd[1]: libpod-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Consumed 1.101s CPU time.
Dec  5 01:32:58 compute-0 podman[334492]: 2025-12-05 01:32:58.028803688 +0000 UTC m=+1.348583744 container died 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c7790a5c013f62800b330a52ca789eb30ff23301048de6bf2fc4d2b66f6de9-merged.mount: Deactivated successfully.
Dec  5 01:32:58 compute-0 podman[334492]: 2025-12-05 01:32:58.116844925 +0000 UTC m=+1.436624941 container remove 93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:32:58 compute-0 systemd[1]: libpod-conmon-93225c5ebee2616a19a686b405692b0556885390d0621a26f2ef524aacf068cf.scope: Deactivated successfully.
Dec  5 01:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:32:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:32:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 46d09c64-8068-48b6-9331-00162b20c71c does not exist
Dec  5 01:32:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 96b5dabe-4790-44c6-ac6f-86f987475bc4 does not exist
Dec  5 01:32:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:32:59 compute-0 python3.9[334794]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:32:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:32:59 compute-0 podman[158197]: time="2025-12-05T01:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38322 "" "Go-http-client/1.1"
Dec  5 01:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7683 "" "Go-http-client/1.1"
Dec  5 01:32:59 compute-0 podman[334947]: 2025-12-05 01:32:59.876269992 +0000 UTC m=+0.083507212 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:33:00 compute-0 python3.9[334948]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:01 compute-0 python3.9[335120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: ERROR   01:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:33:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:33:01 compute-0 podman[335122]: 2025-12-05 01:33:01.430429901 +0000 UTC m=+0.087664817 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:33:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:03 compute-0 python3.9[335292]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:04 compute-0 python3.9[335445]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:06 compute-0 python3.9[335598]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:07 compute-0 python3.9[335751]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:08 compute-0 python3.9[335904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:33:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:09 compute-0 podman[336059]: 2025-12-05 01:33:09.483829994 +0000 UTC m=+0.097361238 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:33:09 compute-0 podman[336057]: 2025-12-05 01:33:09.486748035 +0000 UTC m=+0.103964082 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:33:09 compute-0 podman[336058]: 2025-12-05 01:33:09.506874297 +0000 UTC m=+0.123160648 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:33:09 compute-0 podman[336060]: 2025-12-05 01:33:09.54531641 +0000 UTC m=+0.156684274 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 01:33:09 compute-0 python3.9[336071]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:10 compute-0 python3.9[336295]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:11 compute-0 python3.9[336447]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:13 compute-0 python3.9[336599]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:14 compute-0 python3.9[336751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:14 compute-0 podman[336752]: 2025-12-05 01:33:14.707670155 +0000 UTC m=+0.118007954 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc.)
Dec  5 01:33:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:15 compute-0 python3.9[336922]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:33:16
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'images']
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:33:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:16 compute-0 python3.9[337074]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:17 compute-0 python3.9[337226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:18 compute-0 python3.9[337378]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:19 compute-0 python3.9[337531]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:20 compute-0 podman[337655]: 2025-12-05 01:33:20.511838533 +0000 UTC m=+0.122963693 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 01:33:20 compute-0 python3.9[337704]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:21 compute-0 python3.9[337856]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:22 compute-0 podman[337980]: 2025-12-05 01:33:22.418095887 +0000 UTC m=+0.093426308 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:33:22 compute-0 python3.9[338032]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:23 compute-0 python3.9[338184]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:25 compute-0 python3.9[338336]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:33:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:27 compute-0 python3.9[338488]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:33:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:28 compute-0 python3.9[338640]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:29 compute-0 python3.9[338792]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:33:29 compute-0 podman[158197]: time="2025-12-05T01:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec  5 01:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7698 "" "Go-http-client/1.1"
Dec  5 01:33:30 compute-0 podman[338916]: 2025-12-05 01:33:30.395688594 +0000 UTC m=+0.101520544 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  5 01:33:30 compute-0 python3.9[338960]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:33:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:30 compute-0 systemd[1]: Reloading.
Dec  5 01:33:30 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:33:30 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: ERROR   01:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:33:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:33:31 compute-0 podman[339044]: 2025-12-05 01:33:31.6777316 +0000 UTC m=+0.083732298 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:33:32 compute-0 python3.9[339166]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:33 compute-0 python3.9[339319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:34 compute-0 python3.9[339472]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:35 compute-0 python3.9[339625]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:36 compute-0 python3.9[339778]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:37 compute-0 python3.9[339931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:38 compute-0 python3.9[340084]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:39 compute-0 podman[340111]: 2025-12-05 01:33:39.676702012 +0000 UTC m=+0.082706239 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  5 01:33:39 compute-0 podman[340116]: 2025-12-05 01:33:39.694050146 +0000 UTC m=+0.094611651 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:33:39 compute-0 podman[340118]: 2025-12-05 01:33:39.713493508 +0000 UTC m=+0.119001211 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:33:39 compute-0 podman[340120]: 2025-12-05 01:33:39.722483399 +0000 UTC m=+0.118831877 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  5 01:33:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:40 compute-0 python3.9[340319]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:33:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:42 compute-0 python3.9[340472]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:43 compute-0 python3.9[340624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:44 compute-0 python3.9[340776]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:44 compute-0 podman[340805]: 2025-12-05 01:33:44.844676276 +0000 UTC m=+0.098112939 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, container_name=kepler)
Dec  5 01:33:45 compute-0 python3.9[340945]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:33:46 compute-0 python3.9[341097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:47 compute-0 python3.9[341249]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:48 compute-0 python3.9[341401]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:49 compute-0 python3.9[341553]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:50 compute-0 python3.9[341706]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:50 compute-0 podman[341731]: 2025-12-05 01:33:50.74232628 +0000 UTC m=+0.150257054 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:33:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:52 compute-0 python3.9[341878]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:33:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:52 compute-0 podman[341903]: 2025-12-05 01:33:52.693040725 +0000 UTC m=+0.098391586 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:33:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.161 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:33:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:33:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:33:58 compute-0 python3.9[342052]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  5 01:33:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a9e95ca3-4e8d-4c4e-8cfb-86c1c4928c86 does not exist
Dec  5 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 560a3136-509a-428a-b58b-803e3b17fc03 does not exist
Dec  5 01:33:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fe101afd-b09c-47a1-8dcb-e63129a8984b does not exist
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:33:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:33:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:33:59 compute-0 podman[158197]: time="2025-12-05T01:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec  5 01:33:59 compute-0 python3.9[342341]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  5 01:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7709 "" "Go-http-client/1.1"
Dec  5 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:33:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.462458803 +0000 UTC m=+0.097641126 container create d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.42187085 +0000 UTC m=+0.057053243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:00 compute-0 systemd[1]: Started libpod-conmon-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope.
Dec  5 01:34:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.641534319 +0000 UTC m=+0.276716732 container init d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:34:00 compute-0 podman[342575]: 2025-12-05 01:34:00.644051399 +0000 UTC m=+0.114988809 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.657423183 +0000 UTC m=+0.292605536 container start d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.664555692 +0000 UTC m=+0.299738045 container attach d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:34:00 compute-0 nostalgic_chaplygin[342606]: 167 167
Dec  5 01:34:00 compute-0 systemd[1]: libpod-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope: Deactivated successfully.
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.66985678 +0000 UTC m=+0.305039133 container died d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:34:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-013c64b8af0236e9f2ea180c4e79df7b7840720f09776e6a9b74f6bf372d0c60-merged.mount: Deactivated successfully.
Dec  5 01:34:00 compute-0 podman[342558]: 2025-12-05 01:34:00.72181495 +0000 UTC m=+0.356997263 container remove d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:34:00 compute-0 systemd[1]: libpod-conmon-d296c4e90b2ae6bab3538e442d98bed8d0cf63eff7d029c3ee65407fb94ac7eb.scope: Deactivated successfully.
Dec  5 01:34:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:00 compute-0 podman[342692]: 2025-12-05 01:34:00.940620056 +0000 UTC m=+0.066373183 container create 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:00.908774887 +0000 UTC m=+0.034528054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:01 compute-0 systemd[1]: Started libpod-conmon-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope.
Dec  5 01:34:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:01 compute-0 python3.9[342686]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  5 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.103077719 +0000 UTC m=+0.228830866 container init 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:34:01 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:34:01 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.119381934 +0000 UTC m=+0.245135041 container start 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:34:01 compute-0 podman[342692]: 2025-12-05 01:34:01.126716939 +0000 UTC m=+0.252470046 container attach 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: ERROR   01:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:34:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:34:02 compute-0 systemd-logind[792]: New session 57 of user zuul.
Dec  5 01:34:02 compute-0 systemd[1]: Started Session 57 of User zuul.
Dec  5 01:34:02 compute-0 podman[342763]: 2025-12-05 01:34:02.282475512 +0000 UTC m=+0.119344302 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  5 01:34:02 compute-0 jovial_elion[342709]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:34:02 compute-0 jovial_elion[342709]: --> relative data size: 1.0
Dec  5 01:34:02 compute-0 jovial_elion[342709]: --> All data devices are unavailable
Dec  5 01:34:02 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Dec  5 01:34:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:02 compute-0 systemd-logind[792]: Session 57 logged out. Waiting for processes to exit.
Dec  5 01:34:02 compute-0 systemd-logind[792]: Removed session 57.
Dec  5 01:34:02 compute-0 systemd[1]: libpod-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Deactivated successfully.
Dec  5 01:34:02 compute-0 systemd[1]: libpod-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Consumed 1.185s CPU time.
Dec  5 01:34:02 compute-0 podman[342815]: 2025-12-05 01:34:02.456456777 +0000 UTC m=+0.050201182 container died 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:34:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-307c3fdbb472154534fe6324601cedf4878139ab65bf4d2e85ec4e18387072e6-merged.mount: Deactivated successfully.
Dec  5 01:34:02 compute-0 podman[342815]: 2025-12-05 01:34:02.542173609 +0000 UTC m=+0.135918054 container remove 9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:34:02 compute-0 systemd[1]: libpod-conmon-9bc9dc1b900ed808ad0fb7f0e72d95acd5ca66ac4fe5af720d1fcc1901957a17.scope: Deactivated successfully.
Dec  5 01:34:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:03 compute-0 python3.9[343052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.637962757 +0000 UTC m=+0.088921612 container create c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.601730496 +0000 UTC m=+0.052689371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:03 compute-0 systemd[1]: Started libpod-conmon-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope.
Dec  5 01:34:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.764848238 +0000 UTC m=+0.215807133 container init c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.781486862 +0000 UTC m=+0.232445707 container start c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.787736567 +0000 UTC m=+0.238695472 container attach c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:34:03 compute-0 recursing_buck[343130]: 167 167
Dec  5 01:34:03 compute-0 systemd[1]: libpod-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope: Deactivated successfully.
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.792648164 +0000 UTC m=+0.243607009 container died c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:34:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-aceff54755a3c2fab8f208344b978e53436005de3c62489df5fb6602c306136a-merged.mount: Deactivated successfully.
Dec  5 01:34:03 compute-0 podman[343115]: 2025-12-05 01:34:03.861531196 +0000 UTC m=+0.312490041 container remove c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:34:03 compute-0 systemd[1]: libpod-conmon-c61b1e4ce4a58849e60a79175b2152a79b4ceb44ae3c1732b72cf0c05a7ec494.scope: Deactivated successfully.
Dec  5 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.100610507 +0000 UTC m=+0.075917119 container create 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:34:04 compute-0 systemd[1]: Started libpod-conmon-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope.
Dec  5 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.075178988 +0000 UTC m=+0.050485600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.246090736 +0000 UTC m=+0.221397328 container init 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.265606781 +0000 UTC m=+0.240913403 container start 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:34:04 compute-0 podman[343173]: 2025-12-05 01:34:04.271198157 +0000 UTC m=+0.246504759 container attach 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:34:04 compute-0 python3.9[343271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898442.5957577-1249-269042340164892/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:05 compute-0 determined_greider[343216]: {
Dec  5 01:34:05 compute-0 determined_greider[343216]:    "0": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:        {
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "devices": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "/dev/loop3"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            ],
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_name": "ceph_lv0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_size": "21470642176",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "name": "ceph_lv0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "tags": {
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_name": "ceph",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.crush_device_class": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.encrypted": "0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_id": "0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.vdo": "0"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            },
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "vg_name": "ceph_vg0"
Dec  5 01:34:05 compute-0 determined_greider[343216]:        }
Dec  5 01:34:05 compute-0 determined_greider[343216]:    ],
Dec  5 01:34:05 compute-0 determined_greider[343216]:    "1": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:        {
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "devices": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "/dev/loop4"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            ],
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_name": "ceph_lv1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_size": "21470642176",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "name": "ceph_lv1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "tags": {
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_name": "ceph",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.crush_device_class": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.encrypted": "0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_id": "1",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.vdo": "0"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            },
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "vg_name": "ceph_vg1"
Dec  5 01:34:05 compute-0 determined_greider[343216]:        }
Dec  5 01:34:05 compute-0 determined_greider[343216]:    ],
Dec  5 01:34:05 compute-0 determined_greider[343216]:    "2": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:        {
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "devices": [
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "/dev/loop5"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            ],
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_name": "ceph_lv2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_size": "21470642176",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "name": "ceph_lv2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "tags": {
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.cluster_name": "ceph",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.crush_device_class": "",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.encrypted": "0",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osd_id": "2",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:                "ceph.vdo": "0"
Dec  5 01:34:05 compute-0 determined_greider[343216]:            },
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "type": "block",
Dec  5 01:34:05 compute-0 determined_greider[343216]:            "vg_name": "ceph_vg2"
Dec  5 01:34:05 compute-0 determined_greider[343216]:        }
Dec  5 01:34:05 compute-0 determined_greider[343216]:    ]
Dec  5 01:34:05 compute-0 determined_greider[343216]: }
Dec  5 01:34:05 compute-0 systemd[1]: libpod-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope: Deactivated successfully.
Dec  5 01:34:05 compute-0 podman[343173]: 2025-12-05 01:34:05.254358512 +0000 UTC m=+1.229665134 container died 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:34:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f0d8df8c212c4f3e3837d21327d5220d289e6bb6c256009783a63b7a9a03d9-merged.mount: Deactivated successfully.
Dec  5 01:34:05 compute-0 podman[343173]: 2025-12-05 01:34:05.352503371 +0000 UTC m=+1.327809943 container remove 9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_greider, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  5 01:34:05 compute-0 systemd[1]: libpod-conmon-9327bde0e646df1dbd3d2700ae95b1af7749fdf9cf06aafbcba6c26c80cc53c6.scope: Deactivated successfully.
Dec  5 01:34:05 compute-0 python3.9[343426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.245178171 +0000 UTC m=+0.068209084 container create 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.210765201 +0000 UTC m=+0.033796194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:06 compute-0 systemd[1]: Started libpod-conmon-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope.
Dec  5 01:34:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.352714242 +0000 UTC m=+0.175745145 container init 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.365386946 +0000 UTC m=+0.188417849 container start 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.369293775 +0000 UTC m=+0.192324728 container attach 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 01:34:06 compute-0 happy_swartz[343613]: 167 167
Dec  5 01:34:06 compute-0 systemd[1]: libpod-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope: Deactivated successfully.
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.374634674 +0000 UTC m=+0.197665607 container died 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-58e80a61d960c5587b574c88d14efcada466dbf8e5eb80648ca5db737bcf632f-merged.mount: Deactivated successfully.
Dec  5 01:34:06 compute-0 podman[343579]: 2025-12-05 01:34:06.427069477 +0000 UTC m=+0.250100380 container remove 6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:34:06 compute-0 systemd[1]: libpod-conmon-6a3f672dada0da837346a89c457f8e7863ad2ac730e9de2516cadb88bf38f988.scope: Deactivated successfully.
Dec  5 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.668083193 +0000 UTC m=+0.074322635 container create c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:34:06 compute-0 python3.9[343685]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.641977194 +0000 UTC m=+0.048216646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:34:06 compute-0 systemd[1]: Started libpod-conmon-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope.
Dec  5 01:34:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.832291755 +0000 UTC m=+0.238531267 container init c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.848834777 +0000 UTC m=+0.255074229 container start c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:34:06 compute-0 podman[343691]: 2025-12-05 01:34:06.855483162 +0000 UTC m=+0.261722614 container attach c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:34:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:07 compute-0 python3.9[343861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:08 compute-0 exciting_kilby[343707]: {
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_id": 0,
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "type": "bluestore"
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    },
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_id": 1,
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "type": "bluestore"
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    },
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_id": 2,
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:        "type": "bluestore"
Dec  5 01:34:08 compute-0 exciting_kilby[343707]:    }
Dec  5 01:34:08 compute-0 exciting_kilby[343707]: }
Dec  5 01:34:08 compute-0 systemd[1]: libpod-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Deactivated successfully.
Dec  5 01:34:08 compute-0 systemd[1]: libpod-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Consumed 1.223s CPU time.
Dec  5 01:34:08 compute-0 podman[343962]: 2025-12-05 01:34:08.172604686 +0000 UTC m=+0.070032015 container died c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-01298082028ab38d72b2951a674af384a9a062aec394f53d2894ddbecd71bb8d-merged.mount: Deactivated successfully.
Dec  5 01:34:08 compute-0 podman[343962]: 2025-12-05 01:34:08.28454834 +0000 UTC m=+0.181975679 container remove c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:34:08 compute-0 systemd[1]: libpod-conmon-c19d64a8a8c42294b92b3217a5e240f8457694fb9fd4f74feb6755f3cb0e1306.scope: Deactivated successfully.
Dec  5 01:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:34:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:34:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:34:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 35bb95c6-2613-4809-b8d6-4f8732e32256 does not exist
Dec  5 01:34:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fbfb9c7f-41eb-4f93-8145-2a4fd4c3fae1 does not exist
Dec  5 01:34:08 compute-0 python3.9[344021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898446.9884043-1249-164230920362177/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:34:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:34:09 compute-0 python3.9[344221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:10 compute-0 podman[344316]: 2025-12-05 01:34:10.206578736 +0000 UTC m=+0.087233266 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:34:10 compute-0 podman[344318]: 2025-12-05 01:34:10.222545851 +0000 UTC m=+0.095965109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  5 01:34:10 compute-0 podman[344317]: 2025-12-05 01:34:10.22859322 +0000 UTC m=+0.096967277 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:34:10 compute-0 podman[344319]: 2025-12-05 01:34:10.268085382 +0000 UTC m=+0.130837922 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:34:10 compute-0 python3.9[344407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898448.7559974-1249-86805877696747/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:11 compute-0 python3.9[344575]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:12 compute-0 python3.9[344696]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898450.6688058-1249-88355885668195/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:12 compute-0 python3.9[344846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:13 compute-0 python3.9[344967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898452.2692113-1249-228933640642297/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:14 compute-0 python3.9[345119]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:34:15 compute-0 podman[345243]: 2025-12-05 01:34:15.59535958 +0000 UTC m=+0.123671302 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  5 01:34:15 compute-0 python3.9[345287]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:34:16
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', 'backups', '.rgw.root', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:34:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:17 compute-0 python3.9[345441]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:34:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:19 compute-0 python3.9[345593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:20 compute-0 python3.9[345717]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764898457.6766553-1356-153094691212899/.source _original_basename=.3knihjcz follow=False checksum=837b301b2ce47747228e2c392556c83935f6fd48 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec  5 01:34:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:21 compute-0 podman[345843]: 2025-12-05 01:34:21.158424879 +0000 UTC m=+0.098621873 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64)
Dec  5 01:34:21 compute-0 python3.9[345885]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:34:22 compute-0 python3.9[346041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:22 compute-0 podman[346136]: 2025-12-05 01:34:22.889397352 +0000 UTC m=+0.101367290 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5667 writes, 23K keys, 5667 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5667 writes, 889 syncs, 6.37 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5630e4c90dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  5 01:34:23 compute-0 python3.9[346174]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898461.6264431-1382-128199124472342/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:24 compute-0 python3.9[346333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:34:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:24 compute-0 python3.9[346454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764898463.3931947-1397-254673578141208/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:34:26 compute-0 python3.9[346606]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:34:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:27 compute-0 python3.9[346758]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:34:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:28 compute-0 python3[346910]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:34:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.2 total, 600.0 interval#012Cumulative writes: 7007 writes, 28K keys, 7007 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 7007 writes, 1237 syncs, 5.66 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56484670add0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 3.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.2 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Dec  5 01:34:29 compute-0 podman[158197]: time="2025-12-05T01:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 38321 "" "Go-http-client/1.1"
Dec  5 01:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 7692 "" "Go-http-client/1.1"
Dec  5 01:34:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: ERROR   01:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:34:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:34:31 compute-0 podman[346948]: 2025-12-05 01:34:31.674132402 +0000 UTC m=+0.089811638 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 01:34:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 5737 writes, 24K keys, 5737 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5737 writes, 931 syncs, 6.16 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55c43575edd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Dec  5 01:34:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:37 compute-0 podman[346981]: 2025-12-05 01:34:37.51274498 +0000 UTC m=+5.121480687 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:34:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 01:34:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.547 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.548 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.548 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f83151a5f70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f83151a6690>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.549 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8316c39160>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee59a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f941a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee79e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.550 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f942c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee6300>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7470>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee74d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7530>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7590>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.551 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314f94620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee76b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7f50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8314ee7fb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f83140f1bb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.552 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8314f94050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8314f940e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f831506dc10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.553 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8314ee7950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8314ee7a10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8314f94170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.554 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8314ee79b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8314f94200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8314f94290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8314ee7ef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.555 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8314f94320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8314ee59d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8314ee7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.556 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8314ee7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8314ee74a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8314ee7500>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.557 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8314ee7560>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8314ee75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8314f945f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.558 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8314ee7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8314ee7680>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8314ee76e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8314ee7f20>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.559 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8314ee7740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8314ee7f80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f83150abd40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.560 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.561 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.562 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:34:42.563 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:34:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:44 compute-0 podman[347017]: 2025-12-05 01:34:44.579554531 +0000 UTC m=+3.991445363 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:34:44 compute-0 podman[347018]: 2025-12-05 01:34:44.591627228 +0000 UTC m=+3.998738947 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:34:44 compute-0 podman[347019]: 2025-12-05 01:34:44.595667151 +0000 UTC m=+4.001364891 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:34:44 compute-0 podman[346925]: 2025-12-05 01:34:44.612182832 +0000 UTC m=+16.179576647 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  5 01:34:44 compute-0 podman[347020]: 2025-12-05 01:34:44.620040951 +0000 UTC m=+4.020709900 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  5 01:34:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:44 compute-0 podman[347118]: 2025-12-05 01:34:44.829571898 +0000 UTC m=+0.098218642 container create 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:34:44 compute-0 podman[347118]: 2025-12-05 01:34:44.764636726 +0000 UTC m=+0.033283460 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  5 01:34:44 compute-0 python3[346910]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec  5 01:34:46 compute-0 podman[347280]: 2025-12-05 01:34:46.025349657 +0000 UTC m=+0.117310385 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 01:34:46 compute-0 python3.9[347326]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:34:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:47 compute-0 python3.9[347480]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  5 01:34:48 compute-0 python3.9[347632]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:34:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:49 compute-0 python3[347784]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:34:50 compute-0 podman[347817]: 2025-12-05 01:34:50.15552359 +0000 UTC m=+0.100069843 container create 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:34:50 compute-0 podman[347817]: 2025-12-05 01:34:50.101656557 +0000 UTC m=+0.046202860 image pull 5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  5 01:34:50 compute-0 python3[347784]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  5 01:34:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:51 compute-0 python3.9[348004]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:34:51 compute-0 podman[348031]: 2025-12-05 01:34:51.709673068 +0000 UTC m=+0.116497891 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  5 01:34:51 compute-0 auditd[704]: Audit daemon rotating log files
Dec  5 01:34:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:52 compute-0 python3.9[348178]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:34:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:53 compute-0 podman[348301]: 2025-12-05 01:34:53.497064557 +0000 UTC m=+0.155694636 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:34:53 compute-0 python3.9[348352]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898492.7316506-1489-225511827972435/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:34:54 compute-0 python3.9[348428]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:34:54 compute-0 systemd[1]: Reloading.
Dec  5 01:34:54 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:34:54 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:34:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.897078) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494897132, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1566, "num_deletes": 251, "total_data_size": 2589134, "memory_usage": 2619608, "flush_reason": "Manual Compaction"}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494911515, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2554353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14737, "largest_seqno": 16302, "table_properties": {"data_size": 2547042, "index_size": 4382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14509, "raw_average_key_size": 19, "raw_value_size": 2532543, "raw_average_value_size": 3422, "num_data_blocks": 200, "num_entries": 740, "num_filter_entries": 740, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898318, "oldest_key_time": 1764898318, "file_creation_time": 1764898494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14480 microseconds, and 5655 cpu microseconds.
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.911568) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2554353 bytes OK
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.911586) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914754) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914766) EVENT_LOG_v1 {"time_micros": 1764898494914763, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.914779) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2582361, prev total WAL file size 2582361, number of live WAL files 2.
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.916451) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2494KB)], [35(6794KB)]
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494916561, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9512073, "oldest_snapshot_seqno": -1}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 3972 keys, 7756297 bytes, temperature: kUnknown
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494976439, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7756297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7727350, "index_size": 17893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 97004, "raw_average_key_size": 24, "raw_value_size": 7653006, "raw_average_value_size": 1926, "num_data_blocks": 759, "num_entries": 3972, "num_filter_entries": 3972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.976850) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7756297 bytes
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.979493) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.2 rd, 129.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 6.6 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4486, records dropped: 514 output_compression: NoCompression
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.979525) EVENT_LOG_v1 {"time_micros": 1764898494979510, "job": 16, "event": "compaction_finished", "compaction_time_micros": 60125, "compaction_time_cpu_micros": 34748, "output_level": 6, "num_output_files": 1, "total_output_size": 7756297, "num_input_records": 4486, "num_output_records": 3972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494980768, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898494986379, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.916130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:54 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:34:54.986691) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:34:55 compute-0 python3.9[348538]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:34:55 compute-0 systemd[1]: Reloading.
Dec  5 01:34:56 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:34:56 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:34:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:34:56 compute-0 systemd[1]: Starting nova_compute container...
Dec  5 01:34:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  5 01:34:56 compute-0 podman[348577]: 2025-12-05 01:34:56.656419659 +0000 UTC m=+0.158252437 container init 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:34:56 compute-0 podman[348577]: 2025-12-05 01:34:56.679014909 +0000 UTC m=+0.180847667 container start 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Dec  5 01:34:56 compute-0 podman[348577]: nova_compute
Dec  5 01:34:56 compute-0 nova_compute[348591]: + sudo -E kolla_set_configs
Dec  5 01:34:56 compute-0 systemd[1]: Started nova_compute container.
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Validating config file
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying service configuration files
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /etc/ceph
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Creating directory /etc/ceph
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Writing out command to execute
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:34:56 compute-0 nova_compute[348591]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  5 01:34:56 compute-0 nova_compute[348591]: ++ cat /run_command
Dec  5 01:34:56 compute-0 nova_compute[348591]: + CMD=nova-compute
Dec  5 01:34:56 compute-0 nova_compute[348591]: + ARGS=
Dec  5 01:34:56 compute-0 nova_compute[348591]: + sudo kolla_copy_cacerts
Dec  5 01:34:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:56 compute-0 nova_compute[348591]: + [[ ! -n '' ]]
Dec  5 01:34:56 compute-0 nova_compute[348591]: + . kolla_extend_start
Dec  5 01:34:56 compute-0 nova_compute[348591]: Running command: 'nova-compute'
Dec  5 01:34:56 compute-0 nova_compute[348591]: + echo 'Running command: '\''nova-compute'\'''
Dec  5 01:34:56 compute-0 nova_compute[348591]: + umask 0022
Dec  5 01:34:56 compute-0 nova_compute[348591]: + exec nova-compute
Dec  5 01:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:34:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:34:58 compute-0 python3.9[348753]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.272 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.273 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.274 348595 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.274 348595 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.544 348595 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.577 348595 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:34:59 compute-0 nova_compute[348591]: 2025-12-05 01:34:59.578 348595 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  5 01:34:59 compute-0 podman[158197]: time="2025-12-05T01:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42588 "" "Go-http-client/1.1"
Dec  5 01:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8123 "" "Go-http-client/1.1"
Dec  5 01:34:59 compute-0 python3.9[348907]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.279 348595 INFO nova.virt.driver [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.434 348595 INFO nova.compute.provider_config [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.453 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.453 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.454 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.455 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.456 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.457 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.458 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.459 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.460 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.461 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.462 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.463 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.464 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.465 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.466 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.467 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.468 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.469 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.470 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.471 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.472 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.473 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.474 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.475 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.476 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.477 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.478 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.479 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.480 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.481 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.482 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.483 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.484 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.485 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.486 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.487 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.488 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.489 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.490 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.491 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.492 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.493 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.494 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.495 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.496 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.497 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.498 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.499 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.500 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.501 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.502 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.503 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.504 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.505 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.506 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.507 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.508 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.509 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.510 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.511 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.512 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.513 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.514 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.515 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.516 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.517 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.518 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.519 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.520 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.521 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.522 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.523 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.524 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.525 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.526 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.527 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.528 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.529 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.530 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.531 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.532 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.533 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.534 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.535 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 WARNING oslo_config.cfg [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  5 01:35:00 compute-0 nova_compute[348591]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  5 01:35:00 compute-0 nova_compute[348591]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  5 01:35:00 compute-0 nova_compute[348591]: and ``live_migration_inbound_addr`` respectively.
Dec  5 01:35:00 compute-0 nova_compute[348591]: ).  Its value may be silently ignored in the future.
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.536 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.537 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.538 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_secret_uuid        = cbd280d3-cbd8-528b-ace6-2b3a887cdcee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.539 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.540 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.541 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.542 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.543 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.544 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.545 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.546 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.547 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.548 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.549 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.550 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.551 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.552 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.553 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.554 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.555 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.556 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.557 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.558 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.559 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.560 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.561 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.562 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.563 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.564 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.565 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.566 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.567 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.568 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.569 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.570 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.571 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.572 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.573 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.574 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.575 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.576 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.577 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.578 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.579 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.580 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.581 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.582 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.583 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.584 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.585 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.586 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.587 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.588 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.589 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.590 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.591 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.592 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.593 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.594 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.595 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.596 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.597 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.598 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.599 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.600 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.601 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.602 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.603 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.604 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.605 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.606 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.607 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.608 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.609 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.610 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.611 348595 DEBUG oslo_service.service [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.612 348595 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.634 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.635 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.654 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f4b97d32190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.660 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f4b97d32190> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.662 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.682 348595 WARNING nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  5 01:35:00 compute-0 nova_compute[348591]: 2025-12-05 01:35:00.682 348595 DEBUG nova.virt.libvirt.volume.mount [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  5 01:35:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:00 compute-0 python3.9[349080]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: ERROR   01:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:35:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.767 348595 INFO nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host capabilities <capabilities>
Dec  5 01:35:01 compute-0 nova_compute[348591]: 
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <host>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <uuid>6c9ead2d-8495-4e2b-9845-f862956e441e</uuid>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <cpu>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <arch>x86_64</arch>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model>EPYC-Rome-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <vendor>AMD</vendor>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <microcode version='16777317'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <signature family='23' model='49' stepping='0'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='x2apic'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='tsc-deadline'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='osxsave'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='hypervisor'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='tsc_adjust'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='spec-ctrl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='stibp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='arch-capabilities'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='cmp_legacy'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='topoext'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='virt-ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='lbrv'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='tsc-scale'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='vmcb-clean'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='pause-filter'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='pfthreshold'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='svme-addr-chk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='rdctl-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='skip-l1dfl-vmentry'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='mds-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature name='pschange-mc-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <pages unit='KiB' size='4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <pages unit='KiB' size='2048'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <pages unit='KiB' size='1048576'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </cpu>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <power_management>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <suspend_mem/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </power_management>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <iommu support='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <migration_features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <live/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <uri_transports>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <uri_transport>tcp</uri_transport>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <uri_transport>rdma</uri_transport>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </uri_transports>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </migration_features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <topology>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <cells num='1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <cell id='0'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <memory unit='KiB'>7864320</memory>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <pages unit='KiB' size='2048'>0</pages>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <distances>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <sibling id='0' value='10'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          </distances>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          <cpus num='8'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:          </cpus>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        </cell>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </cells>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </topology>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <cache>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </cache>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <secmodel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model>selinux</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <doi>0</doi>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </secmodel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <secmodel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model>dac</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <doi>0</doi>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </secmodel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </host>
Dec  5 01:35:01 compute-0 nova_compute[348591]: 
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <guest>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <os_type>hvm</os_type>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <arch name='i686'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <wordsize>32</wordsize>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <domain type='qemu'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <domain type='kvm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </arch>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <pae/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <nonpae/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <acpi default='on' toggle='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <apic default='on' toggle='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <cpuselection/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <deviceboot/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <disksnapshot default='on' toggle='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <externalSnapshot/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </guest>
Dec  5 01:35:01 compute-0 nova_compute[348591]: 
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <guest>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <os_type>hvm</os_type>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <arch name='x86_64'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <wordsize>64</wordsize>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <domain type='qemu'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <domain type='kvm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </arch>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <acpi default='on' toggle='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <apic default='on' toggle='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <cpuselection/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <deviceboot/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <disksnapshot default='on' toggle='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <externalSnapshot/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </guest>
Dec  5 01:35:01 compute-0 nova_compute[348591]: 
Dec  5 01:35:01 compute-0 nova_compute[348591]: </capabilities>
Dec  5 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.779 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  5 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.831 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  5 01:35:01 compute-0 nova_compute[348591]: <domainCapabilities>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <domain>kvm</domain>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <arch>i686</arch>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <vcpu max='240'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <iothreads supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <os supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <enum name='firmware'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <loader supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>rom</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pflash</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='readonly'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>yes</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='secure'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </loader>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </os>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <cpu>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='maximumMigratable'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <vendor>AMD</vendor>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='succor'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='custom' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cooperlake'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Denverton'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Denverton-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Denverton-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Denverton-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='EPYC-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx10'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx10-128'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx10-256'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx10-512'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Haswell-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='IvyBridge'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='KnightsMill'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SierraForest'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Snowridge'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='athlon'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='athlon-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='core2duo'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='core2duo-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='coreduo'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='coreduo-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='n270'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='n270-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='phenom'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='phenom-v1'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </cpu>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <memoryBacking supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <enum name='sourceType'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <value>file</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <value>anonymous</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <value>memfd</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </memoryBacking>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <devices>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <disk supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='diskDevice'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>disk</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>cdrom</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>floppy</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>lun</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>ide</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>fdc</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:01 compute-0 podman[349174]: 2025-12-05 01:35:01.933371663 +0000 UTC m=+0.128356273 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>sata</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </disk>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <graphics supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vnc</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>egl-headless</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </graphics>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <video supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='modelType'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vga</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>cirrus</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>none</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>bochs</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>ramfb</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </video>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <hostdev supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='mode'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>subsystem</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='startupPolicy'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>mandatory</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>requisite</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>optional</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='subsysType'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pci</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='capsType'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='pciBackend'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </hostdev>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <rng supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>random</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>egd</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </rng>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <filesystem supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='driverType'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>path</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>handle</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>virtiofs</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </filesystem>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <tpm supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>tpm-tis</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>tpm-crb</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>emulator</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>external</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='backendVersion'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>2.0</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </tpm>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <redirdev supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </redirdev>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <channel supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </channel>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <crypto supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='model'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>qemu</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </crypto>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <interface supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='backendType'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>passt</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </interface>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <panic supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>isa</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>hyperv</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </panic>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <console supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>null</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vc</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>dev</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>file</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pipe</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>stdio</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>udp</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>tcp</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>qemu-vdagent</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </console>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </devices>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <features>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <gic supported='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <genid supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <backup supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <async-teardown supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <ps2 supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <sev supported='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <sgx supported='no'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <hyperv supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='features'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>relaxed</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vapic</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>spinlocks</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vpindex</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>runtime</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>synic</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>stimer</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>reset</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>vendor_id</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>frequencies</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>reenlightenment</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>tlbflush</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>ipi</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>avic</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>emsr_bitmap</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>xmm_input</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <defaults>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </defaults>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </hyperv>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <launchSecurity supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='sectype'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>tdx</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </launchSecurity>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </features>
Dec  5 01:35:01 compute-0 nova_compute[348591]: </domainCapabilities>
Dec  5 01:35:01 compute-0 nova_compute[348591]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  5 01:35:01 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.843 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  5 01:35:01 compute-0 nova_compute[348591]: <domainCapabilities>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <domain>kvm</domain>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <arch>i686</arch>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <vcpu max='4096'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <iothreads supported='yes'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <os supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <enum name='firmware'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <loader supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>rom</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>pflash</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='readonly'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>yes</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='secure'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </loader>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  </os>
Dec  5 01:35:01 compute-0 nova_compute[348591]:  <cpu>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <enum name='maximumMigratable'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <vendor>AMD</vendor>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='succor'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:01 compute-0 nova_compute[348591]:    <mode name='custom' supported='yes'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:01 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-128'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-256'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-512'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </cpu>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <memoryBacking supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <enum name='sourceType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>anonymous</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>memfd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </memoryBacking>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <disk supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='diskDevice'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>disk</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cdrom</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>floppy</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>lun</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>fdc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>sata</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </disk>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <graphics supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vnc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egl-headless</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </graphics>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <video supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='modelType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vga</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cirrus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>none</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>bochs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ramfb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </video>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hostdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='mode'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>subsystem</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='startupPolicy'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>mandatory</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>requisite</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>optional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='subsysType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pci</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='capsType'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='pciBackend'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hostdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <rng supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>random</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </rng>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <filesystem supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='driverType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>path</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>handle</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtiofs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </filesystem>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <tpm supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-tis</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-crb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emulator</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>external</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendVersion'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>2.0</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </tpm>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <redirdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </redirdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <channel supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </channel>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <crypto supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </crypto>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <interface supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>passt</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </interface>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <panic supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>isa</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>hyperv</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </panic>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <console supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>null</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dev</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pipe</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stdio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>udp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tcp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu-vdagent</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </console>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <features>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <gic supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <genid supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backup supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <async-teardown supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <ps2 supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sev supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sgx supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hyperv supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='features'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>relaxed</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vapic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>spinlocks</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vpindex</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>runtime</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>synic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stimer</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reset</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vendor_id</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>frequencies</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reenlightenment</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tlbflush</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ipi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>avic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emsr_bitmap</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>xmm_input</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hyperv>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <launchSecurity supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='sectype'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tdx</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </launchSecurity>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </features>
Dec  5 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec  5 01:35:02 compute-0 nova_compute[348591]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.926 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:01.934 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  5 01:35:02 compute-0 nova_compute[348591]: <domainCapabilities>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <domain>kvm</domain>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <arch>x86_64</arch>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <vcpu max='240'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <iothreads supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <os supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <enum name='firmware'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <loader supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>rom</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pflash</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='readonly'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>yes</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='secure'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </loader>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </os>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <cpu>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='maximumMigratable'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <vendor>AMD</vendor>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='succor'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='custom' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-128'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-256'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-512'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </cpu>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <memoryBacking supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <enum name='sourceType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>anonymous</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>memfd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </memoryBacking>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <disk supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='diskDevice'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>disk</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cdrom</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>floppy</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>lun</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ide</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>fdc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>sata</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </disk>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <graphics supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vnc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egl-headless</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </graphics>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <video supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='modelType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vga</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cirrus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>none</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>bochs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ramfb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </video>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hostdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='mode'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>subsystem</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='startupPolicy'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>mandatory</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>requisite</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>optional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='subsysType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pci</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='capsType'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='pciBackend'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hostdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <rng supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>random</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </rng>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <filesystem supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='driverType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>path</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>handle</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtiofs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </filesystem>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <tpm supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-tis</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-crb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emulator</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>external</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendVersion'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>2.0</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </tpm>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <redirdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </redirdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <channel supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </channel>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <crypto supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </crypto>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <interface supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>passt</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </interface>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <panic supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>isa</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>hyperv</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </panic>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <console supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>null</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dev</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pipe</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stdio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>udp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tcp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu-vdagent</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </console>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <features>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <gic supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <genid supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backup supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <async-teardown supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <ps2 supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sev supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sgx supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hyperv supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='features'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>relaxed</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vapic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>spinlocks</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vpindex</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>runtime</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>synic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stimer</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reset</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vendor_id</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>frequencies</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reenlightenment</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tlbflush</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ipi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>avic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emsr_bitmap</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>xmm_input</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hyperv>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <launchSecurity supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='sectype'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tdx</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </launchSecurity>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </features>
Dec  5 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec  5 01:35:02 compute-0 nova_compute[348591]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.087 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  5 01:35:02 compute-0 nova_compute[348591]: <domainCapabilities>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <domain>kvm</domain>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <arch>x86_64</arch>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <vcpu max='4096'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <iothreads supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <os supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <enum name='firmware'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>efi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <loader supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>rom</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pflash</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='readonly'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>yes</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='secure'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>yes</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>no</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </loader>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </os>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <cpu>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='maximumMigratable'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>on</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>off</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <vendor>AMD</vendor>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='succor'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <mode name='custom' supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Denverton-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='auto-ibrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amd-psfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='stibp-always-on'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='EPYC-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-128'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-256'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx10-512'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='prefetchiti'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Haswell-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512er'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512pf'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fma4'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tbm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xop'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='amx-tile'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-bf16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-fp16'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bitalg'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrc'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fzrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='la57'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='taa-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xfd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ifma'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cmpccxadd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fbsdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='fsrs'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ibrs-all'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mcdt-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pbrsb-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='psdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='serialize'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vaes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='hle'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='rtm'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512bw'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512cd'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512dq'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512f'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='avx512vl'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='invpcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pcid'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='pku'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='mpx'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='core-capability'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='split-lock-detect'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='cldemote'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='erms'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='gfni'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdir64b'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='movdiri'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='xsaves'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='athlon-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='core2duo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='coreduo-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='n270-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='ss'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <blockers model='phenom-v1'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnow'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <feature name='3dnowext'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </blockers>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </mode>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </cpu>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <memoryBacking supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <enum name='sourceType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>anonymous</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <value>memfd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </memoryBacking>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <disk supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='diskDevice'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>disk</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cdrom</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>floppy</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>lun</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>fdc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>sata</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </disk>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <graphics supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vnc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egl-headless</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </graphics>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <video supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='modelType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vga</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>cirrus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>none</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>bochs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ramfb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </video>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hostdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='mode'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>subsystem</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='startupPolicy'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>mandatory</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>requisite</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>optional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='subsysType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pci</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>scsi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='capsType'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='pciBackend'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hostdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <rng supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtio-non-transitional</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>random</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>egd</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </rng>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <filesystem supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='driverType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>path</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>handle</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>virtiofs</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </filesystem>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <tpm supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-tis</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tpm-crb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emulator</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>external</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendVersion'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>2.0</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </tpm>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <redirdev supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='bus'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>usb</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </redirdev>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <channel supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </channel>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <crypto supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendModel'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>builtin</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </crypto>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <interface supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='backendType'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>default</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>passt</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </interface>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <panic supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='model'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>isa</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>hyperv</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </panic>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <console supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='type'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>null</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vc</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pty</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dev</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>file</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>pipe</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stdio</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>udp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tcp</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>unix</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>qemu-vdagent</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>dbus</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </console>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </devices>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  <features>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <gic supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <genid supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <backup supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <async-teardown supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <ps2 supported='yes'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sev supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <sgx supported='no'/>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <hyperv supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='features'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>relaxed</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vapic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>spinlocks</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vpindex</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>runtime</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>synic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>stimer</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reset</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>vendor_id</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>frequencies</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>reenlightenment</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tlbflush</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>ipi</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>avic</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>emsr_bitmap</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>xmm_input</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </defaults>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </hyperv>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    <launchSecurity supported='yes'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      <enum name='sectype'>
Dec  5 01:35:02 compute-0 nova_compute[348591]:        <value>tdx</value>
Dec  5 01:35:02 compute-0 nova_compute[348591]:      </enum>
Dec  5 01:35:02 compute-0 nova_compute[348591]:    </launchSecurity>
Dec  5 01:35:02 compute-0 nova_compute[348591]:  </features>
Dec  5 01:35:02 compute-0 nova_compute[348591]: </domainCapabilities>
Dec  5 01:35:02 compute-0 nova_compute[348591]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.218 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.219 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.219 348595 DEBUG nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.220 348595 INFO nova.virt.libvirt.host [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Secure Boot support detected#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.222 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.223 348595 INFO nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.238 348595 DEBUG nova.virt.libvirt.driver [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.341 348595 INFO nova.virt.node [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id#033[00m
Dec  5 01:35:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.397 348595 WARNING nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Compute nodes ['acf26aa2-2fef-4a53-8a44-6cfa2eb15d17'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  5 01:35:02 compute-0 python3.9[349272]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.448 348595 INFO nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 WARNING nova.compute.manager [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.503 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG oslo_concurrency.lockutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG nova.compute.resource_tracker [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:35:02 compute-0 nova_compute[348591]: 2025-12-05 01:35:02.504 348595 DEBUG oslo_concurrency.processutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:35:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:35:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1552625451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.305 348595 DEBUG oslo_concurrency.processutils [None req-98a8894b-6cb1-42fe-abda-2a88d939a193 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.800s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:35:03 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec  5 01:35:03 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:35:03 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec  5 01:35:03 compute-0 python3.9[349467]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:35:03 compute-0 systemd[1]: Stopping nova_compute container...
Dec  5 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.833 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.834 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:35:03 compute-0 nova_compute[348591]: 2025-12-05 01:35:03.834 348595 DEBUG oslo_concurrency.lockutils [None req-8abe25b1-e822-4379-baef-2e65fedf72c8 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:35:04 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec  5 01:35:04 compute-0 systemd[1]: libpod-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6.scope: Deactivated successfully.
Dec  5 01:35:04 compute-0 systemd[1]: libpod-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6.scope: Consumed 4.018s CPU time.
Dec  5 01:35:04 compute-0 podman[349493]: 2025-12-05 01:35:04.268562697 +0000 UTC m=+0.514251041 container died 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  5 01:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6-userdata-shm.mount: Deactivated successfully.
Dec  5 01:35:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0-merged.mount: Deactivated successfully.
Dec  5 01:35:04 compute-0 podman[349493]: 2025-12-05 01:35:04.36611938 +0000 UTC m=+0.611807694 container cleanup 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3)
Dec  5 01:35:04 compute-0 podman[349493]: nova_compute
Dec  5 01:35:04 compute-0 podman[349523]: nova_compute
Dec  5 01:35:04 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  5 01:35:04 compute-0 systemd[1]: Stopped nova_compute container.
Dec  5 01:35:04 compute-0 systemd[1]: edpm_nova_compute.service: Consumed 1.037s CPU time, 18.6M memory peak, read 0B from disk, written 116.0K to disk.
Dec  5 01:35:04 compute-0 systemd[1]: Starting nova_compute container...
Dec  5 01:35:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eacfa37577497aeadf1a19b9a8d6f7d0cc735e29817b277f1b42b6463a7744c0/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:04 compute-0 podman[349534]: 2025-12-05 01:35:04.672472729 +0000 UTC m=+0.171809906 container init 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:35:04 compute-0 podman[349534]: 2025-12-05 01:35:04.68540357 +0000 UTC m=+0.184740747 container start 7e4d1102a0626942d9f944e09cc1dcb68eba5a8bb8d27cbb786766a0e2d545b6 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:35:04 compute-0 podman[349534]: nova_compute
Dec  5 01:35:04 compute-0 nova_compute[349548]: + sudo -E kolla_set_configs
Dec  5 01:35:04 compute-0 systemd[1]: Started nova_compute container.
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Validating config file
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying service configuration files
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /etc/ceph
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Creating directory /etc/ceph
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Writing out command to execute
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:35:04 compute-0 nova_compute[349548]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  5 01:35:04 compute-0 nova_compute[349548]: ++ cat /run_command
Dec  5 01:35:04 compute-0 nova_compute[349548]: + CMD=nova-compute
Dec  5 01:35:04 compute-0 nova_compute[349548]: + ARGS=
Dec  5 01:35:04 compute-0 nova_compute[349548]: + sudo kolla_copy_cacerts
Dec  5 01:35:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:04 compute-0 nova_compute[349548]: + [[ ! -n '' ]]
Dec  5 01:35:04 compute-0 nova_compute[349548]: + . kolla_extend_start
Dec  5 01:35:04 compute-0 nova_compute[349548]: + echo 'Running command: '\''nova-compute'\'''
Dec  5 01:35:04 compute-0 nova_compute[349548]: Running command: 'nova-compute'
Dec  5 01:35:04 compute-0 nova_compute[349548]: + umask 0022
Dec  5 01:35:04 compute-0 nova_compute[349548]: + exec nova-compute
Dec  5 01:35:05 compute-0 python3.9[349711]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  5 01:35:06 compute-0 systemd[1]: Started libpod-conmon-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope.
Dec  5 01:35:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:06 compute-0 podman[349736]: 2025-12-05 01:35:06.21955037 +0000 UTC m=+0.194631852 container init 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  5 01:35:06 compute-0 podman[349736]: 2025-12-05 01:35:06.244330881 +0000 UTC m=+0.219412393 container start 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Dec  5 01:35:06 compute-0 python3.9[349711]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Applying nova statedir ownership
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  5 01:35:06 compute-0 nova_compute_init[349757]: INFO:nova_statedir:Nova statedir ownership complete
Dec  5 01:35:06 compute-0 systemd[1]: libpod-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope: Deactivated successfully.
Dec  5 01:35:06 compute-0 podman[349758]: 2025-12-05 01:35:06.355222026 +0000 UTC m=+0.060985163 container died 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2-userdata-shm.mount: Deactivated successfully.
Dec  5 01:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0254dc70324eb2caac3f834ec1798536e6121356d6feb9bd233f8d75726b53fd-merged.mount: Deactivated successfully.
Dec  5 01:35:06 compute-0 podman[349766]: 2025-12-05 01:35:06.410635542 +0000 UTC m=+0.074663884 container cleanup 4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:35:06 compute-0 systemd[1]: libpod-conmon-4d8938d8db32fcae4f45945a49d34b745f8e8c75a9d36333a9dd0778cc2dcac2.scope: Deactivated successfully.
Dec  5 01:35:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.899 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  5 01:35:06 compute-0 nova_compute[349548]: 2025-12-05 01:35:06.900 349552 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.027 349552 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.056 349552 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.057 349552 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Dec  5 01:35:07 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Dec  5 01:35:07 compute-0 systemd[1]: session-56.scope: Consumed 3min 55.336s CPU time.
Dec  5 01:35:07 compute-0 systemd-logind[792]: Session 56 logged out. Waiting for processes to exit.
Dec  5 01:35:07 compute-0 systemd-logind[792]: Removed session 56.
Dec  5 01:35:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.621 349552 INFO nova.virt.driver [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.747 349552 INFO nova.compute.provider_config [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.782 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_concurrency.lockutils [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.783 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.784 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.785 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.786 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.787 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.788 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.789 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.790 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.791 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.792 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.793 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.794 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.795 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.796 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.797 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.798 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.799 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.800 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.801 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.802 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.803 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.804 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.805 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.806 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.807 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.808 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.809 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.810 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.811 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.812 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.813 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.814 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.815 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.816 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.817 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.818 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.819 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.820 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.821 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.822 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.823 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.824 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.825 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.826 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.827 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.828 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.829 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.830 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.831 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.832 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.833 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.834 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.835 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.836 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.837 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.838 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.839 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.840 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.841 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.842 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.843 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.844 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.845 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.846 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.847 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.848 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.849 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.850 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.851 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.852 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.853 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.854 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.855 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.856 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.857 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.858 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.859 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.860 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.861 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.862 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 WARNING oslo_config.cfg [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  5 01:35:07 compute-0 nova_compute[349548]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  5 01:35:07 compute-0 nova_compute[349548]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  5 01:35:07 compute-0 nova_compute[349548]: and ``live_migration_inbound_addr`` respectively.
Dec  5 01:35:07 compute-0 nova_compute[349548]: ).  Its value may be silently ignored in the future.#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.863 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.864 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.865 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_secret_uuid        = cbd280d3-cbd8-528b-ace6-2b3a887cdcee log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.866 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.867 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.868 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.869 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.870 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.871 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.872 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.873 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.874 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.875 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.876 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.877 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.878 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.879 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.880 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.881 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.882 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.883 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.884 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.885 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.886 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.887 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.888 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.889 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.890 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.891 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.892 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.893 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.894 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.895 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.896 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.897 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.898 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.899 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.900 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.901 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.902 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.903 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.904 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.905 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.906 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.907 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.908 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.909 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.910 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.911 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.912 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.913 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.914 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.915 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.916 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.917 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.918 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.919 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.920 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.921 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.922 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.923 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.924 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.925 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.926 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.927 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.928 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.929 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.930 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.931 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.932 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.933 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.934 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.935 349552 DEBUG oslo_service.service [None req-8de77f70-ec17-4140-abf2-4182d217dfdb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.936 349552 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.993 349552 INFO nova.virt.node [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.994 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  5 01:35:07 compute-0 nova_compute[349548]: 2025-12-05 01:35:07.995 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.015 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb5d61f9a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.021 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb5d61f9a60> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.023 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.037 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host capabilities <capabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: 
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <host>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <uuid>6c9ead2d-8495-4e2b-9845-f862956e441e</uuid>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <arch>x86_64</arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model>EPYC-Rome-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <vendor>AMD</vendor>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <microcode version='16777317'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <signature family='23' model='49' stepping='0'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='x2apic'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='tsc-deadline'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='osxsave'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='hypervisor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='tsc_adjust'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='spec-ctrl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='stibp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='arch-capabilities'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='cmp_legacy'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='topoext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='virt-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='lbrv'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='tsc-scale'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='vmcb-clean'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='pause-filter'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='pfthreshold'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='svme-addr-chk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='rdctl-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='skip-l1dfl-vmentry'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='mds-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature name='pschange-mc-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <pages unit='KiB' size='4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <pages unit='KiB' size='2048'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <pages unit='KiB' size='1048576'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <power_management>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <suspend_mem/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </power_management>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <iommu support='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <migration_features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <live/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <uri_transports>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <uri_transport>tcp</uri_transport>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <uri_transport>rdma</uri_transport>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </uri_transports>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </migration_features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <topology>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <cells num='1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <cell id='0'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <memory unit='KiB'>7864320</memory>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <pages unit='KiB' size='4'>1966080</pages>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <pages unit='KiB' size='2048'>0</pages>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <distances>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <sibling id='0' value='10'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          </distances>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          <cpus num='8'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:          </cpus>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        </cell>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </cells>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </topology>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <cache>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </cache>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <secmodel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model>selinux</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <doi>0</doi>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </secmodel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <secmodel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model>dac</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <doi>0</doi>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </secmodel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </host>
Dec  5 01:35:08 compute-0 nova_compute[349548]: 
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <guest>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <os_type>hvm</os_type>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <arch name='i686'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <wordsize>32</wordsize>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <domain type='qemu'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <domain type='kvm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <pae/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <nonpae/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <acpi default='on' toggle='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <apic default='on' toggle='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <cpuselection/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <deviceboot/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <disksnapshot default='on' toggle='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <externalSnapshot/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </guest>
Dec  5 01:35:08 compute-0 nova_compute[349548]: 
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <guest>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <os_type>hvm</os_type>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <arch name='x86_64'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <wordsize>64</wordsize>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <domain type='qemu'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <domain type='kvm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <acpi default='on' toggle='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <apic default='on' toggle='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <cpuselection/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <deviceboot/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <disksnapshot default='on' toggle='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <externalSnapshot/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </guest>
Dec  5 01:35:08 compute-0 nova_compute[349548]: 
Dec  5 01:35:08 compute-0 nova_compute[349548]: </capabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.045 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.051 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  5 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <domain>kvm</domain>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <arch>i686</arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <vcpu max='4096'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <iothreads supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <os supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='firmware'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <loader supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>rom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pflash</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='readonly'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>yes</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='secure'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </loader>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </os>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='maximumMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <vendor>AMD</vendor>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='succor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='custom' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-128'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-256'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-512'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <memoryBacking supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='sourceType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>anonymous</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>memfd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </memoryBacking>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <disk supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='diskDevice'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>disk</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cdrom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>floppy</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>lun</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>fdc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>sata</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <graphics supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vnc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egl-headless</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </graphics>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <video supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='modelType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vga</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cirrus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>none</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>bochs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ramfb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </video>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hostdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='mode'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>subsystem</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='startupPolicy'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>mandatory</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>requisite</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>optional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='subsysType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pci</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='capsType'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='pciBackend'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hostdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <rng supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>random</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <filesystem supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='driverType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>path</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>handle</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtiofs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </filesystem>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <tpm supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-tis</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-crb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emulator</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>external</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendVersion'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>2.0</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </tpm>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <redirdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </redirdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <channel supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </channel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <crypto supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </crypto>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <interface supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>passt</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <panic supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>isa</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>hyperv</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </panic>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <console supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>null</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dev</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pipe</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stdio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>udp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tcp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu-vdagent</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </console>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <gic supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <genid supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backup supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <async-teardown supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <ps2 supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sev supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sgx supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hyperv supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='features'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>relaxed</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vapic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>spinlocks</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vpindex</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>runtime</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>synic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stimer</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reset</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vendor_id</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>frequencies</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reenlightenment</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tlbflush</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ipi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>avic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emsr_bitmap</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>xmm_input</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hyperv>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <launchSecurity supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='sectype'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tdx</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </launchSecurity>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.061 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  5 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <domain>kvm</domain>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <arch>i686</arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <vcpu max='240'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <iothreads supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <os supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='firmware'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <loader supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>rom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pflash</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='readonly'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>yes</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='secure'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </loader>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </os>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='maximumMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <vendor>AMD</vendor>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='succor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='custom' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-128'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-256'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-512'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <memoryBacking supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='sourceType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>anonymous</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>memfd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </memoryBacking>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <disk supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='diskDevice'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>disk</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cdrom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>floppy</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>lun</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ide</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>fdc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>sata</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <graphics supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vnc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egl-headless</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </graphics>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <video supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='modelType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vga</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cirrus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>none</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>bochs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ramfb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </video>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hostdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='mode'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>subsystem</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='startupPolicy'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>mandatory</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>requisite</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>optional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='subsysType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pci</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='capsType'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='pciBackend'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hostdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <rng supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>random</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <filesystem supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='driverType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>path</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>handle</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtiofs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </filesystem>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <tpm supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-tis</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-crb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emulator</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>external</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendVersion'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>2.0</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </tpm>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <redirdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </redirdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <channel supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </channel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <crypto supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </crypto>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <interface supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>passt</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <panic supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>isa</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>hyperv</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </panic>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <console supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>null</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dev</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pipe</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stdio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>udp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tcp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu-vdagent</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </console>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <gic supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <genid supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backup supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <async-teardown supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <ps2 supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sev supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sgx supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hyperv supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='features'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>relaxed</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vapic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>spinlocks</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vpindex</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>runtime</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>synic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stimer</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reset</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vendor_id</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>frequencies</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reenlightenment</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tlbflush</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ipi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>avic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emsr_bitmap</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>xmm_input</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hyperv>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <launchSecurity supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='sectype'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tdx</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </launchSecurity>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.129 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.130 349552 DEBUG nova.virt.libvirt.volume.mount [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.135 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  5 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <domain>kvm</domain>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <arch>x86_64</arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <vcpu max='4096'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <iothreads supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <os supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='firmware'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>efi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <loader supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>rom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pflash</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='readonly'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>yes</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='secure'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>yes</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </loader>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </os>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='maximumMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <vendor>AMD</vendor>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='succor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='custom' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-128'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-256'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-512'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <memoryBacking supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='sourceType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>anonymous</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>memfd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </memoryBacking>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <disk supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='diskDevice'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>disk</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cdrom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>floppy</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>lun</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>fdc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>sata</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <graphics supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vnc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egl-headless</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </graphics>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <video supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='modelType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vga</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cirrus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>none</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>bochs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ramfb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </video>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hostdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='mode'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>subsystem</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='startupPolicy'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>mandatory</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>requisite</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>optional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='subsysType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pci</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='capsType'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='pciBackend'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hostdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <rng supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>random</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <filesystem supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='driverType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>path</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>handle</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtiofs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </filesystem>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <tpm supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-tis</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-crb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emulator</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>external</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendVersion'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>2.0</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </tpm>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <redirdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </redirdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <channel supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </channel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <crypto supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </crypto>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <interface supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>passt</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <panic supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>isa</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>hyperv</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </panic>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <console supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>null</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dev</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pipe</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stdio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>udp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tcp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu-vdagent</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </console>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <gic supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <genid supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backup supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <async-teardown supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <ps2 supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sev supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sgx supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hyperv supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='features'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>relaxed</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vapic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>spinlocks</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vpindex</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>runtime</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>synic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stimer</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reset</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vendor_id</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>frequencies</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reenlightenment</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tlbflush</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ipi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>avic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emsr_bitmap</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>xmm_input</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hyperv>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <launchSecurity supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='sectype'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tdx</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </launchSecurity>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.253 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  5 01:35:08 compute-0 nova_compute[349548]: <domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <path>/usr/libexec/qemu-kvm</path>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <domain>kvm</domain>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <arch>x86_64</arch>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <vcpu max='240'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <iothreads supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <os supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='firmware'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <loader supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>rom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pflash</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='readonly'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>yes</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='secure'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>no</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </loader>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </os>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-passthrough' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='hostPassthroughMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='maximum' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='maximumMigratable'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>on</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>off</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='host-model' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <vendor>AMD</vendor>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='x2apic'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-deadline'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='hypervisor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc_adjust'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='spec-ctrl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='stibp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='cmp_legacy'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='overflow-recov'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='succor'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='amd-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='virt-ssbd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lbrv'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='tsc-scale'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='vmcb-clean'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='flushbyasid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pause-filter'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='pfthreshold'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='svme-addr-chk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <feature policy='disable' name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <mode name='custom' supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Broadwell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cascadelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Cooperlake-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Denverton-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Dhyana-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Genoa-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='auto-ibrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Milan-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amd-psfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='no-nested-data-bp'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='null-sel-clr-base'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='stibp-always-on'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-Rome-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='EPYC-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='GraniteRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-128'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-256'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx10-512'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='prefetchiti'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Haswell-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-noTSX'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v6'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Icelake-Server-v7'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='IvyBridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='KnightsMill-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4fmaps'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-4vnniw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512er'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512pf'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G4-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Opteron_G5-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fma4'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tbm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xop'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SapphireRapids-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='amx-tile'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-bf16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-fp16'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512-vpopcntdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bitalg'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vbmi2'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrc'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fzrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='la57'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='taa-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='tsx-ldtrk'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xfd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='SierraForest-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ifma'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-ne-convert'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx-vnni-int8'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='bus-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cmpccxadd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fbsdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='fsrs'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ibrs-all'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mcdt-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pbrsb-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='psdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='sbdr-ssdp-no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='serialize'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vaes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='vpclmulqdq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Client-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='hle'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='rtm'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Skylake-Server-v5'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512bw'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512cd'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512dq'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512f'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='avx512vl'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='invpcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pcid'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='pku'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='mpx'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v2'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v3'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='core-capability'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='split-lock-detect'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='Snowridge-v4'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='cldemote'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='erms'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='gfni'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdir64b'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='movdiri'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='xsaves'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='athlon-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='core2duo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='coreduo-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='n270-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='ss'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <blockers model='phenom-v1'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnow'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <feature name='3dnowext'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </blockers>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </mode>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <memoryBacking supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <enum name='sourceType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>anonymous</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <value>memfd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </memoryBacking>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <disk supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='diskDevice'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>disk</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cdrom</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>floppy</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>lun</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ide</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>fdc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>sata</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <graphics supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vnc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egl-headless</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </graphics>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <video supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='modelType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vga</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>cirrus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>none</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>bochs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ramfb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </video>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hostdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='mode'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>subsystem</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='startupPolicy'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>mandatory</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>requisite</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>optional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='subsysType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pci</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>scsi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='capsType'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='pciBackend'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hostdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <rng supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtio-non-transitional</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>random</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>egd</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <filesystem supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='driverType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>path</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>handle</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>virtiofs</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </filesystem>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <tpm supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-tis</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tpm-crb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emulator</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>external</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendVersion'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>2.0</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </tpm>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <redirdev supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='bus'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>usb</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </redirdev>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <channel supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </channel>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <crypto supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendModel'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>builtin</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </crypto>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <interface supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='backendType'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>default</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>passt</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <panic supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='model'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>isa</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>hyperv</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </panic>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <console supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='type'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>null</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vc</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pty</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dev</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>file</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>pipe</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stdio</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>udp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tcp</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>unix</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>qemu-vdagent</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>dbus</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </console>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  <features>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <gic supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <vmcoreinfo supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <genid supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backingStoreInput supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <backup supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <async-teardown supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <ps2 supported='yes'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sev supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <sgx supported='no'/>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <hyperv supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='features'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>relaxed</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vapic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>spinlocks</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vpindex</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>runtime</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>synic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>stimer</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reset</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>vendor_id</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>frequencies</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>reenlightenment</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tlbflush</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>ipi</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>avic</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>emsr_bitmap</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>xmm_input</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <spinlocks>4095</spinlocks>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <stimer_direct>on</stimer_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_direct>on</tlbflush_direct>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <tlbflush_extended>on</tlbflush_extended>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </defaults>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </hyperv>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    <launchSecurity supported='yes'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      <enum name='sectype'>
Dec  5 01:35:08 compute-0 nova_compute[349548]:        <value>tdx</value>
Dec  5 01:35:08 compute-0 nova_compute[349548]:      </enum>
Dec  5 01:35:08 compute-0 nova_compute[349548]:    </launchSecurity>
Dec  5 01:35:08 compute-0 nova_compute[349548]:  </features>
Dec  5 01:35:08 compute-0 nova_compute[349548]: </domainCapabilities>
Dec  5 01:35:08 compute-0 nova_compute[349548]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.388 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.388 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Secure Boot support detected#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.393 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.394 349552 INFO nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.420 349552 DEBUG nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.676 349552 INFO nova.virt.node [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Determined node identity acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from /var/lib/nova/compute_id#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.705 349552 WARNING nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute nodes ['acf26aa2-2fef-4a53-8a44-6cfa2eb15d17'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.740 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.770 349552 WARNING nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.771 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:35:08 compute-0 nova_compute[349548]: 2025-12-05 01:35:08.772 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:35:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781006965' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.260 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:35:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.607 349552 WARNING nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.608 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4540MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.609 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.609 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.628 349552 WARNING nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] No compute node record for compute-0.ctlplane.example.com:acf26aa2-2fef-4a53-8a44-6cfa2eb15d17: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 could not be found.#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.650 349552 INFO nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.742 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:35:09 compute-0 nova_compute[349548]: 2025-12-05 01:35:09.743 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:35:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 447e25ae-32f6-433e-a5f0-e1c20a12f23c does not exist
Dec  5 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f829ec47-0207-42bb-b5b9-ae6fc1004b7b does not exist
Dec  5 01:35:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 91037b98-9bd8-452a-9c69-71cbdad1ac2a does not exist
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:35:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:35:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:35:10 compute-0 nova_compute[349548]: 2025-12-05 01:35:10.625 349552 INFO nova.scheduler.client.report [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [req-c3e6641d-71e6-4c9e-9fd5-10cd0ee643b3] Created resource provider record via placement API for resource provider with UUID acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 and name compute-0.ctlplane.example.com.#033[00m
Dec  5 01:35:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.032 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:35:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:35:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951102783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.578 349552 DEBUG oslo_concurrency.processutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.588 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec  5 01:35:11 compute-0 nova_compute[349548]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.588 349552 INFO nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] kernel doesn't support AMD SEV#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.589 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.590 349552 DEBUG nova.virt.libvirt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.600158947 +0000 UTC m=+0.067554466 container create 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.56263175 +0000 UTC m=+0.030027239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:11 compute-0 systemd[1]: Started libpod-conmon-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope.
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.664 349552 DEBUG nova.scheduler.client.report [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updated inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.665 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.665 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:35:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.733115597 +0000 UTC m=+0.200511166 container init 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.746008377 +0000 UTC m=+0.213403886 container start 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:35:11 compute-0 podman[350275]: 2025-12-05 01:35:11.753553828 +0000 UTC m=+0.220949367 container attach 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:35:11 compute-0 cranky_payne[350292]: 167 167
Dec  5 01:35:11 compute-0 systemd[1]: libpod-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope: Deactivated successfully.
Dec  5 01:35:11 compute-0 podman[350297]: 2025-12-05 01:35:11.827551243 +0000 UTC m=+0.052473246 container died 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:35:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ae61881b73d619828450aaab5ec25ce0fd7c0fd2b09509d289e9a51dc1fadbf-merged.mount: Deactivated successfully.
Dec  5 01:35:11 compute-0 podman[350297]: 2025-12-05 01:35:11.895652013 +0000 UTC m=+0.120573966 container remove 7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.894 349552 DEBUG nova.compute.provider_tree [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  5 01:35:11 compute-0 systemd[1]: libpod-conmon-7562f3dcf258a735ca297faa550f292e4feea2c05acccfa16066370ece7765a0.scope: Deactivated successfully.
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.957 349552 DEBUG nova.compute.resource_tracker [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.958 349552 DEBUG oslo_concurrency.lockutils [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:35:11 compute-0 nova_compute[349548]: 2025-12-05 01:35:11.960 349552 DEBUG nova.service [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  5 01:35:12 compute-0 nova_compute[349548]: 2025-12-05 01:35:12.072 349552 DEBUG nova.service [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  5 01:35:12 compute-0 nova_compute[349548]: 2025-12-05 01:35:12.072 349552 DEBUG nova.servicegroup.drivers.db [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  5 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.202452434 +0000 UTC m=+0.090630430 container create d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.174108983 +0000 UTC m=+0.062287049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:12 compute-0 systemd[1]: Started libpod-conmon-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope.
Dec  5 01:35:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.390347477 +0000 UTC m=+0.278525563 container init d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.409864542 +0000 UTC m=+0.298042538 container start d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:35:12 compute-0 podman[350316]: 2025-12-05 01:35:12.414877442 +0000 UTC m=+0.303055558 container attach d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:35:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:13 compute-0 systemd-logind[792]: New session 58 of user zuul.
Dec  5 01:35:13 compute-0 systemd[1]: Started Session 58 of User zuul.
Dec  5 01:35:13 compute-0 condescending_mahavira[350333]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:35:13 compute-0 condescending_mahavira[350333]: --> relative data size: 1.0
Dec  5 01:35:13 compute-0 condescending_mahavira[350333]: --> All data devices are unavailable
Dec  5 01:35:13 compute-0 systemd[1]: libpod-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Deactivated successfully.
Dec  5 01:35:13 compute-0 systemd[1]: libpod-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Consumed 1.341s CPU time.
Dec  5 01:35:13 compute-0 podman[350316]: 2025-12-05 01:35:13.827381938 +0000 UTC m=+1.715560024 container died d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-e768cc5adb0312706e0141d8fb5b4801d2aea5da5d51fdd313c278449288e081-merged.mount: Deactivated successfully.
Dec  5 01:35:13 compute-0 podman[350316]: 2025-12-05 01:35:13.934293332 +0000 UTC m=+1.822471328 container remove d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:35:13 compute-0 systemd[1]: libpod-conmon-d442690de1570d07f1af2792231dc73ab837134856d5022df3c91ac13709536d.scope: Deactivated successfully.
Dec  5 01:35:14 compute-0 podman[350636]: 2025-12-05 01:35:14.763031678 +0000 UTC m=+0.124660250 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  5 01:35:14 compute-0 podman[350633]: 2025-12-05 01:35:14.789568558 +0000 UTC m=+0.122491569 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:35:14 compute-0 podman[350626]: 2025-12-05 01:35:14.803815846 +0000 UTC m=+0.187232676 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:35:14 compute-0 podman[350628]: 2025-12-05 01:35:14.810937785 +0000 UTC m=+0.157834046 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_managed=true)
Dec  5 01:35:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:14 compute-0 podman[350654]: 2025-12-05 01:35:14.863500362 +0000 UTC m=+0.175923661 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:35:14 compute-0 python3.9[350627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:35:14 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:35:14 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.003599521 +0000 UTC m=+0.071605389 container create 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:35:15 compute-0 systemd[1]: Started libpod-conmon-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope.
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:14.977518273 +0000 UTC m=+0.045524171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.122310624 +0000 UTC m=+0.190316582 container init 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.139432612 +0000 UTC m=+0.207438470 container start 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:35:15 compute-0 intelligent_montalcini[350782]: 167 167
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.148588227 +0000 UTC m=+0.216594125 container attach 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:35:15 compute-0 systemd[1]: libpod-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope: Deactivated successfully.
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.153145754 +0000 UTC m=+0.221151622 container died 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06defcf4553b9eb207e833cae5ae291bea64d627ba968cc472fe4021ca41f60-merged.mount: Deactivated successfully.
Dec  5 01:35:15 compute-0 podman[350761]: 2025-12-05 01:35:15.23361875 +0000 UTC m=+0.301624608 container remove 0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:35:15 compute-0 systemd[1]: libpod-conmon-0bc8061122653f78b3f09dbf591220f6ea6650a06d3fcd0a599abb6d1fa9ad25.scope: Deactivated successfully.
Dec  5 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.445358659 +0000 UTC m=+0.064726098 container create 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.413805788 +0000 UTC m=+0.033173217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:15 compute-0 systemd[1]: Started libpod-conmon-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope.
Dec  5 01:35:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.587210307 +0000 UTC m=+0.206577716 container init 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.608123501 +0000 UTC m=+0.227490940 container start 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:35:15 compute-0 podman[350831]: 2025-12-05 01:35:15.614465728 +0000 UTC m=+0.233833167 container attach 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:35:16
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.control', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta']
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:16 compute-0 heuristic_euler[350846]: {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    "0": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "devices": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "/dev/loop3"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            ],
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_name": "ceph_lv0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_size": "21470642176",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "name": "ceph_lv0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "tags": {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_name": "ceph",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.crush_device_class": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.encrypted": "0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_id": "0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.vdo": "0"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            },
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "vg_name": "ceph_vg0"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        }
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    ],
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    "1": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "devices": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "/dev/loop4"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            ],
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_name": "ceph_lv1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_size": "21470642176",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "name": "ceph_lv1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "tags": {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_name": "ceph",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.crush_device_class": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.encrypted": "0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_id": "1",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.vdo": "0"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            },
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "vg_name": "ceph_vg1"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        }
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    ],
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    "2": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "devices": [
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "/dev/loop5"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            ],
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_name": "ceph_lv2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_size": "21470642176",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "name": "ceph_lv2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "tags": {
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.cluster_name": "ceph",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.crush_device_class": "",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.encrypted": "0",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osd_id": "2",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:                "ceph.vdo": "0"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            },
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "type": "block",
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:            "vg_name": "ceph_vg2"
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:        }
Dec  5 01:35:16 compute-0 heuristic_euler[350846]:    ]
Dec  5 01:35:16 compute-0 heuristic_euler[350846]: }
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:35:16 compute-0 systemd[1]: libpod-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope: Deactivated successfully.
Dec  5 01:35:16 compute-0 podman[350831]: 2025-12-05 01:35:16.519457881 +0000 UTC m=+1.138825320 container died 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e04c63bf58d004a800cc8afa69f3156ec7f37be8d72c70f9e1d2d765f80069a7-merged.mount: Deactivated successfully.
Dec  5 01:35:16 compute-0 podman[350831]: 2025-12-05 01:35:16.602446227 +0000 UTC m=+1.221813646 container remove 08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_euler, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:35:16 compute-0 systemd[1]: libpod-conmon-08ed0270e21b641be6212307d29595a434566c2752523a0ab111fbda593161fe.scope: Deactivated successfully.
Dec  5 01:35:16 compute-0 podman[350954]: 2025-12-05 01:35:16.651197017 +0000 UTC m=+0.139415411 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git)
Dec  5 01:35:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:16 compute-0 python3.9[351015]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:35:16 compute-0 systemd[1]: Reloading.
Dec  5 01:35:17 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:35:17 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:35:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.041160185 +0000 UTC m=+0.078079460 container create 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.010832988 +0000 UTC m=+0.047752333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:18 compute-0 systemd[1]: Started libpod-conmon-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope.
Dec  5 01:35:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.186517211 +0000 UTC m=+0.223436546 container init 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.202419985 +0000 UTC m=+0.239339250 container start 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.210037187 +0000 UTC m=+0.246956532 container attach 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:35:18 compute-0 nostalgic_kapitsa[351304]: 167 167
Dec  5 01:35:18 compute-0 systemd[1]: libpod-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope: Deactivated successfully.
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.214498452 +0000 UTC m=+0.251417737 container died 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aab5373b672a0897d673a1a7214d3c3c72b7b305ddac25f5c6eaf2b34104524-merged.mount: Deactivated successfully.
Dec  5 01:35:18 compute-0 podman[351265]: 2025-12-05 01:35:18.290100141 +0000 UTC m=+0.327019396 container remove 96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:35:18 compute-0 systemd[1]: libpod-conmon-96784894cabdc39f1060597630ec52c9017eb3c3ab7e673d46479362093c08b7.scope: Deactivated successfully.
Dec  5 01:35:18 compute-0 python3.9[351373]: ansible-ansible.builtin.service_facts Invoked
Dec  5 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.519546084 +0000 UTC m=+0.078161642 container create 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.487806559 +0000 UTC m=+0.046422167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:35:18 compute-0 systemd[1]: Started libpod-conmon-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope.
Dec  5 01:35:18 compute-0 network[351414]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  5 01:35:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:18 compute-0 network[351415]: 'network-scripts' will be removed from distribution in near future.
Dec  5 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.670662481 +0000 UTC m=+0.229278059 container init 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:35:18 compute-0 network[351417]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  5 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.698029715 +0000 UTC m=+0.256645253 container start 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:35:18 compute-0 podman[351379]: 2025-12-05 01:35:18.704168576 +0000 UTC m=+0.262784234 container attach 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:35:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:19 compute-0 quirky_kalam[351405]: {
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_id": 0,
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "type": "bluestore"
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    },
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_id": 1,
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "type": "bluestore"
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    },
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_id": 2,
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:        "type": "bluestore"
Dec  5 01:35:19 compute-0 quirky_kalam[351405]:    }
Dec  5 01:35:19 compute-0 quirky_kalam[351405]: }
Dec  5 01:35:19 compute-0 systemd[1]: libpod-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Deactivated successfully.
Dec  5 01:35:19 compute-0 podman[351379]: 2025-12-05 01:35:19.913030759 +0000 UTC m=+1.471646357 container died 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:35:19 compute-0 systemd[1]: libpod-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Consumed 1.218s CPU time.
Dec  5 01:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aecee5a8ff6ebe74c8125af1b7141ac60e22594edd6db35538e827f3ec6b421-merged.mount: Deactivated successfully.
Dec  5 01:35:20 compute-0 podman[351379]: 2025-12-05 01:35:20.011475226 +0000 UTC m=+1.570090764 container remove 63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:35:20 compute-0 systemd[1]: libpod-conmon-63a2ef1c7486b44abf7a540ec77dee92a5e9bb6f8bf3b695df02b129314b19b3.scope: Deactivated successfully.
Dec  5 01:35:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:35:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:35:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 5782a631-b2fc-41df-bab4-dcedef2b4bbe does not exist
Dec  5 01:35:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3d39be50-59d4-48ab-b446-3c4ebcb85de8 does not exist
Dec  5 01:35:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:35:21 compute-0 podman[351555]: 2025-12-05 01:35:21.924555722 +0000 UTC m=+0.130875183 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': 
['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:35:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:23 compute-0 podman[351626]: 2025-12-05 01:35:23.678541997 +0000 UTC m=+0.111484712 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:35:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:25 compute-0 python3.9[351823]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:35:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:27 compute-0 python3.9[351976]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:28 compute-0 python3.9[352128]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:29 compute-0 podman[158197]: time="2025-12-05T01:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8126 "" "Go-http-client/1.1"
Dec  5 01:35:30 compute-0 python3.9[352280]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:35:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: ERROR   01:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:35:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:35:31 compute-0 python3.9[352432]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  5 01:35:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:32 compute-0 podman[352556]: 2025-12-05 01:35:32.636297822 +0000 UTC m=+0.124357400 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 01:35:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:33 compute-0 python3.9[352600]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  5 01:35:33 compute-0 systemd[1]: Reloading.
Dec  5 01:35:33 compute-0 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  5 01:35:33 compute-0 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  5 01:35:34 compute-0 python3.9[352787]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:35:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:35 compute-0 python3.9[352940]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:35:36 compute-0 python3.9[353090]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:35:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:37 compute-0 nova_compute[349548]: 2025-12-05 01:35:37.074 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:35:37 compute-0 nova_compute[349548]: 2025-12-05 01:35:37.107 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:35:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:37 compute-0 python3.9[353242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:38 compute-0 python3.9[353318]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:35:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:39 compute-0 python3.9[353470]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  5 01:35:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:41 compute-0 python3.9[353622]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  5 01:35:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:43 compute-0 python3.9[353773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:43 compute-0 python3.9[353849]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/ceilometer.conf _original_basename=ceilometer.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:44 compute-0 python3.9[353999]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:44 compute-0 podman[354002]: 2025-12-05 01:35:44.94278846 +0000 UTC m=+0.114553373 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:35:44 compute-0 podman[354000]: 2025-12-05 01:35:44.967604198 +0000 UTC m=+0.136953993 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:35:45 compute-0 podman[354043]: 2025-12-05 01:35:45.046489878 +0000 UTC m=+0.096916968 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack 
Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:35:45 compute-0 podman[354045]: 2025-12-05 01:35:45.08494899 +0000 UTC m=+0.118733042 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:35:45 compute-0 podman[354047]: 2025-12-05 01:35:45.094131448 +0000 UTC m=+0.138425565 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 01:35:45 compute-0 python3.9[354176]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/polling.yaml _original_basename=polling.yaml recurse=False state=file path=/var/lib/openstack/config/telemetry/polling.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:46 compute-0 python3.9[354326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:35:46 compute-0 python3.9[354402]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/openstack/config/telemetry/custom.conf _original_basename=custom.conf recurse=False state=file path=/var/lib/openstack/config/telemetry/custom.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:46 compute-0 podman[354403]: 2025-12-05 01:35:46.968280101 +0000 UTC m=+0.151279807 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, container_name=kepler, release-0.7.12=, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  5 01:35:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:47 compute-0 python3.9[354569]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:35:48 compute-0 python3.9[354721]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:35:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:49 compute-0 python3.9[354874]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:50 compute-0 python3.9[354950]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json _original_basename=ceilometer-agent-compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:51 compute-0 python3.9[355100]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:52 compute-0 podman[355101]: 2025-12-05 01:35:52.152128109 +0000 UTC m=+0.137597612 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is 
a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:35:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:52 compute-0 python3.9[355196]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:53 compute-0 python3.9[355346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:54 compute-0 podman[355347]: 2025-12-05 01:35:54.077734669 +0000 UTC m=+0.134730941 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:35:54 compute-0 python3.9[355444]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json _original_basename=ceilometer_agent_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:55 compute-0 python3.9[355594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:56 compute-0 python3.9[355670]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.162 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:35:56.163 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:35:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:57 compute-0 python3.9[355820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:35:57 compute-0 python3.9[355896]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/firewall.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/firewall.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:35:58 compute-0 python3.9[356046]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:35:59 compute-0 python3.9[356122]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.json _original_basename=node_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:35:59 compute-0 podman[158197]: time="2025-12-05T01:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8119 "" "Go-http-client/1.1"
Dec  5 01:36:00 compute-0 python3.9[356272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:01 compute-0 python3.9[356348]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: ERROR   01:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:36:01 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:36:02 compute-0 python3.9[356498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:02 compute-0 python3.9[356574]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json _original_basename=openstack_network_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:03 compute-0 podman[356575]: 2025-12-05 01:36:03.004841571 +0000 UTC m=+0.116722785 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:36:04 compute-0 python3.9[356743]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:05 compute-0 python3.9[356819]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml _original_basename=openstack_network_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:06 compute-0 python3.9[356969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.071 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.071 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.124 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.127 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.128 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.129 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.129 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.188 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.189 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.190 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.190 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.191 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:36:07 compute-0 python3.9[357045]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.json _original_basename=podman_exporter.json.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:36:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802048944' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:36:07 compute-0 nova_compute[349548]: 2025-12-05 01:36:07.752 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.281 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.282 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4542MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.283 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.283 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 01:36:08 compute-0 python3.9[357217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.410 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 01:36:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Dec  5 01:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2717515268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.936 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.945 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.967 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.971 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 01:36:08 compute-0 nova_compute[349548]: 2025-12-05 01:36:08.972 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 01:36:09 compute-0 python3.9[357313]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:10 compute-0 python3.9[357465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:10 compute-0 python3.9[357541]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:36:11 compute-0 python3.9[357691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:12 compute-0 python3.9[357767]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:36:13 compute-0 python3.9[357917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:14 compute-0 python3.9[357993]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:36:15 compute-0 podman[358145]: 2025-12-05 01:36:15.198081714 +0000 UTC m=+0.111784186 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:15 compute-0 podman[358147]: 2025-12-05 01:36:15.230476835 +0000 UTC m=+0.125219463 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  5 01:36:15 compute-0 podman[358146]: 2025-12-05 01:36:15.232826381 +0000 UTC m=+0.133361122 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:36:15 compute-0 python3.9[358151]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:15 compute-0 podman[358206]: 2025-12-05 01:36:15.311406082 +0000 UTC m=+0.079581210 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  5 01:36:15 compute-0 podman[358207]: 2025-12-05 01:36:15.357213141 +0000 UTC m=+0.122938200 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:36:16
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.control']
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:16 compute-0 python3.9[358403]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:36:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:36:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:17 compute-0 podman[358480]: 2025-12-05 01:36:17.746263537 +0000 UTC m=+0.146654216 container health_status de56270197aa3402c446edbe4964f5579a9f95402ae29e9dba8cc4c1c95afd91 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  5 01:36:18 compute-0 python3.9[358574]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:36:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec  5 01:36:20 compute-0 python3.9[358727]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:36:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Dec  5 01:36:21 compute-0 python3.9[358988]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f08f4fed-4f98-4588-86e4-d5389892212b does not exist
Dec  5 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bba1a3e2-c0cf-4923-8a18-577f5518622d does not exist
Dec  5 01:36:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4d46dd09-cbb4-4240-8f4c-fa2e88bbdde4 does not exist
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:36:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:36:22 compute-0 podman[359139]: 2025-12-05 01:36:22.3572254 +0000 UTC m=+0.096853995 container health_status 348b9a073e3d37f194328bf5fa6db7fcd3452a0dfb49c1c610499053be935a88 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41)
Dec  5 01:36:22 compute-0 python3.9[359126]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:36:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:22 compute-0 python3.9[359311]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:22 compute-0 podman[359326]: 2025-12-05 01:36:22.987612514 +0000 UTC m=+0.083465539 container create 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:22.957641211 +0000 UTC m=+0.053494286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:23 compute-0 systemd[1]: Started libpod-conmon-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope.
Dec  5 01:36:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.137367027 +0000 UTC m=+0.233220152 container init 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.153141441 +0000 UTC m=+0.248994476 container start 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.161072354 +0000 UTC m=+0.256925469 container attach 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:36:23 compute-0 zen_greider[359356]: 167 167
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.168647567 +0000 UTC m=+0.264500602 container died 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:36:23 compute-0 systemd[1]: libpod-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope: Deactivated successfully.
Dec  5 01:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c224bfde926a51a049d740796ef69aeae3823c258cc3b598175fcc27e4503d-merged.mount: Deactivated successfully.
Dec  5 01:36:23 compute-0 podman[359326]: 2025-12-05 01:36:23.248103782 +0000 UTC m=+0.343956847 container remove 898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_greider, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:36:23 compute-0 systemd[1]: libpod-conmon-898abba783c53f951778f0696cb92d7acbe5c7058aad661e52c3fc34719c3f06.scope: Deactivated successfully.
Dec  5 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.514102275 +0000 UTC m=+0.084320983 container create 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.478495973 +0000 UTC m=+0.048714731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:23 compute-0 systemd[1]: Started libpod-conmon-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope.
Dec  5 01:36:23 compute-0 python3.9[359442]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ _original_basename=healthcheck.future recurse=False state=file path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:36:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.703071041 +0000 UTC m=+0.273289799 container init 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.727873008 +0000 UTC m=+0.298091686 container start 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:36:23 compute-0 podman[359444]: 2025-12-05 01:36:23.733377842 +0000 UTC m=+0.303596610 container attach 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:36:24 compute-0 podman[359571]: 2025-12-05 01:36:24.731607924 +0000 UTC m=+0.143254591 container health_status 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:24 compute-0 musing_hofstadter[359459]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:36:24 compute-0 musing_hofstadter[359459]: --> relative data size: 1.0
Dec  5 01:36:24 compute-0 musing_hofstadter[359459]: --> All data devices are unavailable
Dec  5 01:36:25 compute-0 systemd[1]: libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Deactivated successfully.
Dec  5 01:36:25 compute-0 systemd[1]: libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Consumed 1.222s CPU time.
Dec  5 01:36:25 compute-0 conmon[359459]: conmon 3fb39641a19f0a0390f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope/container/memory.events
Dec  5 01:36:25 compute-0 podman[359444]: 2025-12-05 01:36:25.011031385 +0000 UTC m=+1.581250053 container died 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Dec  5 01:36:25 compute-0 python3.9[359657]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  5 01:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fa177136a5cf2c0fd14568aa01a880aeffc020ab8c8aee460e53d9c53269857-merged.mount: Deactivated successfully.
Dec  5 01:36:25 compute-0 podman[359444]: 2025-12-05 01:36:25.121075261 +0000 UTC m=+1.691293929 container remove 3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:36:25 compute-0 systemd[1]: libpod-conmon-3fb39641a19f0a0390f5f0e1f8c511d7bb39dd79978940cbaebb8906968b3e94.scope: Deactivated successfully.
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.264874848 +0000 UTC m=+0.075052683 container create 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.232668202 +0000 UTC m=+0.042846117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:26 compute-0 systemd[1]: Started libpod-conmon-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope.
Dec  5 01:36:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.404364062 +0000 UTC m=+0.214541967 container init 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:36:26 compute-0 python3.9[359962]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.415063583 +0000 UTC m=+0.225241458 container start 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:36:26 compute-0 nostalgic_wing[359985]: 167 167
Dec  5 01:36:26 compute-0 systemd[1]: libpod-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope: Deactivated successfully.
Dec  5 01:36:26 compute-0 conmon[359985]: conmon 421691e592d493159a1b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope/container/memory.events
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.424330363 +0000 UTC m=+0.234508228 container attach 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.424733115 +0000 UTC m=+0.234910970 container died 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:36:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0b1c7bfd312d9cbee8ad177aa53628e8cd342f1c76e9716b8b3dc0c4dc5adeb-merged.mount: Deactivated successfully.
Dec  5 01:36:26 compute-0 podman[359969]: 2025-12-05 01:36:26.489088385 +0000 UTC m=+0.299266210 container remove 421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_wing, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:36:26 compute-0 systemd[1]: libpod-conmon-421691e592d493159a1b9183cc47020ad6a3dfb051db35ac24e776e5a594b8d2.scope: Deactivated successfully.
Dec  5 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.762824826 +0000 UTC m=+0.083647794 container create 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.729121638 +0000 UTC m=+0.049944666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:26 compute-0 systemd[1]: Started libpod-conmon-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope.
Dec  5 01:36:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:26 compute-0 podman[360032]: 2025-12-05 01:36:26.971590049 +0000 UTC m=+0.292413077 container init 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:26.999951417 +0000 UTC m=+0.320774385 container start 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.006283065 +0000 UTC m=+0.327106033 container attach 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:36:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]: {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    "0": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "devices": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "/dev/loop3"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            ],
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_name": "ceph_lv0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_size": "21470642176",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "name": "ceph_lv0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "tags": {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_name": "ceph",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.crush_device_class": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.encrypted": "0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_id": "0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.vdo": "0"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            },
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "vg_name": "ceph_vg0"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        }
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    ],
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    "1": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "devices": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "/dev/loop4"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            ],
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_name": "ceph_lv1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_size": "21470642176",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "name": "ceph_lv1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "tags": {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_name": "ceph",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.crush_device_class": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.encrypted": "0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_id": "1",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.vdo": "0"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            },
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "vg_name": "ceph_vg1"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        }
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    ],
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    "2": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "devices": [
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "/dev/loop5"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            ],
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_name": "ceph_lv2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_size": "21470642176",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "name": "ceph_lv2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "tags": {
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.cluster_name": "ceph",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.crush_device_class": "",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.encrypted": "0",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osd_id": "2",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:                "ceph.vdo": "0"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            },
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "type": "block",
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:            "vg_name": "ceph_vg2"
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:        }
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]:    ]
Dec  5 01:36:27 compute-0 sleepy_haibt[360070]: }
Dec  5 01:36:27 compute-0 systemd[1]: libpod-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope: Deactivated successfully.
Dec  5 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.791636437 +0000 UTC m=+1.112459425 container died 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:36:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-6adb5e79013aae27cd2a7f0d558db66b7ff190389c4d7cf3b50b072cdc163e03-merged.mount: Deactivated successfully.
Dec  5 01:36:27 compute-0 podman[360032]: 2025-12-05 01:36:27.902730892 +0000 UTC m=+1.223553840 container remove 0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_haibt, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:36:27 compute-0 systemd[1]: libpod-conmon-0c10d8e9db13f7651adab9c121579142e978fa801ce28747821c75434330c10c.scope: Deactivated successfully.
Dec  5 01:36:28 compute-0 python3[360184]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:36:28 compute-0 python3[360184]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
     {
          "Id": "b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d",
          "Digest": "sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac",
          "RepoTags": [
               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested"
          ],
          "RepoDigests": [
               "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute@sha256:1810de77f8d2f3059c7cc377072be9f22a136bfbd0a3ad4f08539090d9469fac"
          ],
          "Parent": "",
          "Comment": "",
          "Created": "2025-12-01T05:11:05.921630712Z",
          "Config": {
               "User": "root",
               "Env": [
                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                    "LANG=en_US.UTF-8",
                    "TZ=UTC",
                    "container=oci"
               ],
               "Entrypoint": [
                    "dumb-init",
                    "--single-child",
                    "--"
               ],
               "Cmd": [
                    "kolla_start"
               ],
               "Labels": {
                    "io.buildah.version": "1.41.4",
                    "maintainer": "OpenStack Kubernetes Operator team",
                    "org.label-schema.build-date": "20251125",
                    "org.label-schema.license": "GPLv2",
                    "org.label-schema.name": "CentOS Stream 10 Base Image",
                    "org.label-schema.schema-version": "1.0",
                    "org.label-schema.vendor": "CentOS",
                    "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
                    "tcib_managed": "true"
               },
               "StopSignal": "SIGTERM"
          },
          "Version": "",
          "Author": "",
          "Architecture": "amd64",
          "Os": "linux",
          "Size": 601995467,
          "VirtualSize": 601995467,
          "GraphDriver": {
               "Name": "overlay",
               "Data": {
                    "LowerDir": "/var/lib/containers/storage/overlay/586629c35ab12bf3c21aa8405321e52ee8dc3eb91fe319ec2e2bcffcf2f07750/diff:/var/lib/containers/storage/overlay/b726b38a9994fb8597c31b02de6a7067e1e6010e18192135f063d07cbad1efce/diff:/var/lib/containers/storage/overlay/816b6cf07292074c7d459b3269e12ec5823a680369545863b4ff246f9cf897b1/diff:/var/lib/containers/storage/overlay/9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c/diff",
                    "UpperDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/diff",
                    "WorkDir": "/var/lib/containers/storage/overlay/d27b7d7dfa077a19fa71a8e66da1979beb59cc810756e543817991e757a42a46/work"
               }
          },
          "RootFS": {
               "Type": "layers",
               "Layers": [
                    "sha256:9cbc2db18be2b6332ac66757d2050c04af51f422021105d6d3edc0bda0b8515c",
                    "sha256:4b40c712f1bd18fdb2c50c6adb38e6952f9d174873260f311696915f181f9947",
                    "sha256:eaeeda82071109aa7bb6c3500cc7a126797ce0a53bc0f8828831aba88104203b",
                    "sha256:c58c65fadb00ed08655f756d68fed13f115faec2bc2384f51ce46e18334fe2ae",
                    "sha256:2f6d51b7d12dca1a77173f044cfb4b6a796a560f1015e515fa8ee8a14f36c103"
               ]
          },
          "Labels": {
               "io.buildah.version": "1.41.4",
               "maintainer": "OpenStack Kubernetes Operator team",
               "org.label-schema.build-date": "20251125",
               "org.label-schema.license": "GPLv2",
               "org.label-schema.name": "CentOS Stream 10 Base Image",
               "org.label-schema.schema-version": "1.0",
               "org.label-schema.vendor": "CentOS",
               "tcib_build_tag": "3a7876c5b6a4ff2e2bc50e11e9db5f42",
               "tcib_managed": "true"
          },
          "Annotations": {},
          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
          "User": "root",
          "History": [
               {
                    "created": "2025-11-25T03:00:15.634483436Z",
                    "created_by": "/bin/sh -c #(nop) ADD file:c435edaaf9833341bf9650d5dcfda033191519e1d9c91ecfa082699fd3e149e4 in / ",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-25T03:00:15.634561379Z",
                    "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 10 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-11-25T03:00:18.392267297Z",
                    "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
               },
               {
                    "created": "2025-12-01T05:03:54.682983025Z",
                    "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                    "comment": "FROM quay.io/centos/centos:stream10",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:54.683002525Z",
                    "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:54.683016626Z",
                    "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:54.683029656Z",
                    "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:54.683039096Z",
                    "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:54.683051027Z",
                    "created_by": "/bin/sh -c #(nop) USER root",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:55.032223959Z",
                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:03:55.512889527Z",
                    "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/centos.repo\" ]; then rm -f /etc/yum.repos.d/centos*.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                    "empty_layer": true
               },
               {
                    "created": "2025-12-01T05:04:06.648921904Z",
                    "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && cr
Dec  5 01:36:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.010441894 +0000 UTC m=+0.095212730 container create 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:28.966497288 +0000 UTC m=+0.051268174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:29 compute-0 systemd[1]: Started libpod-conmon-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope.
Dec  5 01:36:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.156998987 +0000 UTC m=+0.241769833 container init 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.178459051 +0000 UTC m=+0.263229897 container start 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:36:29 compute-0 clever_moser[360487]: 167 167
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.186073305 +0000 UTC m=+0.270844201 container attach 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:36:29 compute-0 systemd[1]: libpod-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope: Deactivated successfully.
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.187627619 +0000 UTC m=+0.272398475 container died 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:36:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5a55e35b0437a5c5ccaedb5a86f6ab20b02d4b06b80a6f6aeccf376399260de-merged.mount: Deactivated successfully.
Dec  5 01:36:29 compute-0 podman[360440]: 2025-12-05 01:36:29.248656205 +0000 UTC m=+0.333427011 container remove 34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_moser, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:36:29 compute-0 systemd[1]: libpod-conmon-34137d9ac5655de1be5c0759791220fda0a49f51055f7eac6ba43b0cd4ffd661.scope: Deactivated successfully.
Dec  5 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.465570968 +0000 UTC m=+0.071429501 container create b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.446436619 +0000 UTC m=+0.052295162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:36:29 compute-0 systemd[1]: Started libpod-conmon-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope.
Dec  5 01:36:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:29 compute-0 python3.9[360589]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.627667728 +0000 UTC m=+0.233526341 container init b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.65087336 +0000 UTC m=+0.256731933 container start b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 01:36:29 compute-0 podman[360579]: 2025-12-05 01:36:29.657527128 +0000 UTC m=+0.263385691 container attach b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:36:29 compute-0 podman[158197]: time="2025-12-05T01:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44147 "" "Go-http-client/1.1"
Dec  5 01:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8532 "" "Go-http-client/1.1"
Dec  5 01:36:30 compute-0 happy_austin[360598]: {
Dec  5 01:36:30 compute-0 happy_austin[360598]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_id": 0,
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "type": "bluestore"
Dec  5 01:36:30 compute-0 happy_austin[360598]:    },
Dec  5 01:36:30 compute-0 happy_austin[360598]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_id": 1,
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "type": "bluestore"
Dec  5 01:36:30 compute-0 happy_austin[360598]:    },
Dec  5 01:36:30 compute-0 happy_austin[360598]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_id": 2,
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:36:30 compute-0 happy_austin[360598]:        "type": "bluestore"
Dec  5 01:36:30 compute-0 happy_austin[360598]:    }
Dec  5 01:36:30 compute-0 happy_austin[360598]: }
Dec  5 01:36:30 compute-0 systemd[1]: libpod-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Deactivated successfully.
Dec  5 01:36:30 compute-0 podman[360579]: 2025-12-05 01:36:30.821564854 +0000 UTC m=+1.427423427 container died b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:36:30 compute-0 systemd[1]: libpod-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Consumed 1.152s CPU time.
Dec  5 01:36:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6057a51f4b430411fda0ae1b53263d4e6b76d1e06311128a35c0e9d3440190a7-merged.mount: Deactivated successfully.
Dec  5 01:36:30 compute-0 podman[360579]: 2025-12-05 01:36:30.922615596 +0000 UTC m=+1.528474139 container remove b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:36:30 compute-0 systemd[1]: libpod-conmon-b07d2b2c3a67165c377fd48ca207eb8e097f1e273dce5d41e593c3168a8ebfcb.scope: Deactivated successfully.
Dec  5 01:36:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:36:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:36:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 49845ac3-b80a-4ab5-9614-e1be53bc0c77 does not exist
Dec  5 01:36:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev af177979-a210-4622-8444-eb627b119605 does not exist
Dec  5 01:36:31 compute-0 python3.9[360834]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: ERROR   01:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:36:31 compute-0 openstack_network_exporter[160350]: 
Dec  5 01:36:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:31 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:36:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:33 compute-0 python3.9[360998]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898591.4651084-484-229724654608342/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:33 compute-0 podman[360999]: 2025-12-05 01:36:33.714322331 +0000 UTC m=+0.113477573 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  5 01:36:34 compute-0 python3.9[361091]: ansible-systemd Invoked with state=started name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  5 01:36:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:35 compute-0 python3.9[361245]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  5 01:36:35 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.742 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.844 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.845 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  5 01:36:35 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec  5 01:36:35 compute-0 ceilometer_agent_compute[154702]: 2025-12-05 01:36:35.860 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec  5 01:36:35 compute-0 virtqemud[138703]: End of file while reading data: Input/output error
Dec  5 01:36:36 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Deactivated successfully.
Dec  5 01:36:36 compute-0 systemd[1]: libpod-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.scope: Consumed 4.004s CPU time.
Dec  5 01:36:36 compute-0 podman[361249]: 2025-12-05 01:36:36.076046659 +0000 UTC m=+0.427590750 container died 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.timer: Deactivated successfully.
Dec  5 01:36:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec  5 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: Failed to open /run/systemd/transient/01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-57d4f94636a0dba8.service: No such file or directory
Dec  5 01:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-userdata-shm.mount: Deactivated successfully.
Dec  5 01:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1-merged.mount: Deactivated successfully.
Dec  5 01:36:36 compute-0 podman[361249]: 2025-12-05 01:36:36.164359564 +0000 UTC m=+0.515903615 container cleanup 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  5 01:36:36 compute-0 podman[361249]: ceilometer_agent_compute
Dec  5 01:36:36 compute-0 podman[361277]: ceilometer_agent_compute
Dec  5 01:36:36 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  5 01:36:36 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  5 01:36:36 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec  5 01:36:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e108ee4bc9d8514f675f957a6e3d541692d2b8ecf712c616f7574cf48c93d1e1/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:36 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424.
Dec  5 01:36:36 compute-0 podman[361288]: 2025-12-05 01:36:36.571812746 +0000 UTC m=+0.248724218 container init 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + sudo -E kolla_set_configs
Dec  5 01:36:36 compute-0 podman[361288]: 2025-12-05 01:36:36.62278959 +0000 UTC m=+0.299701002 container start 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 01:36:36 compute-0 podman[361288]: ceilometer_agent_compute
Dec  5 01:36:36 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: sudo: unable to send audit message: Operation not permitted
Dec  5 01:36:36 compute-0 podman[361309]: 2025-12-05 01:36:36.747620112 +0000 UTC m=+0.108083042 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Validating config file
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying service configuration files
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: INFO:__main__:Writing out command to execute
Dec  5 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Main process exited, code=exited, status=1/FAILURE
Dec  5 01:36:36 compute-0 systemd[1]: 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424-1e73c017c9b80aa1.service: Failed with result 'exit-code'.
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: ++ cat /run_command
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + ARGS=
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + sudo kolla_copy_cacerts
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: sudo: unable to send audit message: Operation not permitted
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + [[ ! -n '' ]]
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + . kolla_extend_start
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + umask 0022
Dec  5 01:36:36 compute-0 ceilometer_agent_compute[361302]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  5 01:36:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:37 compute-0 python3.9[361483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.967 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.968 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.969 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.970 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.971 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.972 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.973 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.975 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.976 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.979 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.980 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:37 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:37.980 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.006 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.007 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.008 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.009 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.010 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.011 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.012 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.013 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.014 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.015 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.016 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.017 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.018 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.019 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.020 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.021 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.022 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.023 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.025 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.029 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.031 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.048 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.067 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.069 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.069 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.252 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.253 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.254 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.255 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.256 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.257 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.258 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.259 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.260 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.261 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.262 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.263 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.264 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.265 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.266 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.267 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.268 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.269 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.274 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.310 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.311 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:36:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:36:38 compute-0 python3.9[361574]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/node_exporter/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/node_exporter/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  5 01:36:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:39 compute-0 python3.9[361726]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  5 01:36:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:40 compute-0 python3.9[361878]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  5 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  5 01:36:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [#012     {#012          "Id": "0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83",#012          "Digest": "sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80",#012          "RepoTags": [#012               "quay.io/prometheus/node-exporter:v1.5.0"#012          ],#012          "RepoDigests": [#012               "quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c",#012               "quay.io/prometheus/node-exporter@sha256:fa8e5700b7762fffe0674e944762f44bb787a7e44d97569fe55348260453bf80"#012          ],#012          "Parent": "",#012          "Comment": "",#012          "Created": "2022-11-29T19:06:14.987394068Z",#012          "Config": {#012               "User": "nobody",#012               "ExposedPorts": {#012                    "9100/tcp": {}#012               },#012               "Env": [#012                    "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"#012               ],#012               "Entrypoint": [#012                    "/bin/node_exporter"#012               ],#012               "Labels": {#012                    "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               }#012          },#012          "Version": "19.03.8",#012          "Author": "",#012          "Architecture": "amd64",#012          "Os": "linux",#012          "Size": 23851788,#012          "VirtualSize": 23851788,#012          "GraphDriver": {#012               "Name": "overlay",#012               "Data": {#012                    "LowerDir": "/var/lib/containers/storage/overlay/a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c/diff:/var/lib/containers/storage/overlay/0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8/diff",#012                    "UpperDir": 
"/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/diff",#012                    "WorkDir": "/var/lib/containers/storage/overlay/2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b/work"#012               }#012          },#012          "RootFS": {#012               "Type": "layers",#012               "Layers": [#012                    "sha256:0438ade5aeea533b00cd75095bec75fbc2b307bace4c89bb39b75d428637bcd8",#012                    "sha256:9f2d25037e3e722ca7f4ca9c7a885f19a2ce11140592ee0acb323dec3b26640d",#012                    "sha256:76857a93cd03e12817c36c667cc3263d58886232cad116327e55d79036e5977d"#012               ]#012          },#012          "Labels": {#012               "maintainer": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012          },#012          "Annotations": {},#012          "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",#012          "User": "nobody",#012          "History": [#012               {#012                    "created": "2022-10-26T06:30:33.700079457Z",#012                    "created_by": "/bin/sh -c #(nop) ADD file:5e991de3200129dc05c3130f7a64bebb5704486b4f773bfcaa6b13165d6c2416 in / "#012               },#012               {#012                    "created": "2022-10-26T06:30:33.794221299Z",#012                    "created_by": "/bin/sh -c #(nop)  CMD [\"sh\"]",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:54.845364304Z",#012                    "created_by": "/bin/sh -c #(nop)  MAINTAINER The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-15T10:54:55.54866664Z",#012                    "created_by": 
"/bin/sh -c #(nop) COPY dir:02c961e21531be78a67ed9bad67d03391cfedcead8b0a35cfb9171346636f11a in / ",#012                    "author": "The Prometheus Authors <prometheus-developers@googlegroups.com>"#012               },#012               {#012                    "created": "2022-11-29T19:06:13.622645057Z",#012                    "created_by": "/bin/sh -c #(nop)  LABEL maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.810765105Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG ARCH=amd64",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:13.990897895Z",#012                    "created_by": "/bin/sh -c #(nop)  ARG OS=linux",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.358293759Z",#012                    "created_by": "/bin/sh -c #(nop) COPY file:3ef20dd145817033186947b860c3b6f7bb06d4c435257258c0e5df15f6e51eb7 in /bin/node_exporter "#012               },#012               {#012                    "created": "2022-11-29T19:06:14.630644274Z",#012                    "created_by": "/bin/sh -c #(nop)  EXPOSE 9100",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.79596292Z",#012                    "created_by": "/bin/sh -c #(nop)  USER nobody",#012                    "empty_layer": true#012               },#012               {#012                    "created": "2022-11-29T19:06:14.987394068Z",#012                    "created_by": "/bin/sh -c #(nop)  ENTRYPOINT [\"/bin/node_exporter\"]",#012                    "empty_layer": true#012               }#012          ],#012          "NamesHistory": [#012               
"quay.io/prometheus/node-exporter:v1.5.0"#012          ]#012     }#012]#012: quay.io/prometheus/node-exporter:v1.5.0
Dec  5 01:36:42 compute-0 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Deactivated successfully.
Dec  5 01:36:42 compute-0 systemd[1]: libpod-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.scope: Consumed 6.104s CPU time.
Dec  5 01:36:42 compute-0 podman[362076]: 2025-12-05 01:36:42.67264058 +0000 UTC m=+0.101457445 container died 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: Deactivated successfully.
Dec  5 01:36:42 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a.
Dec  5 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: No such file or directory
Dec  5 01:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-userdata-shm.mount: Deactivated successfully.
Dec  5 01:36:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae76d3462a5826074750f1233391fe337ca691f19a9e669132d737b113b57717-merged.mount: Deactivated successfully.
Dec  5 01:36:42 compute-0 podman[362076]: 2025-12-05 01:36:42.75864768 +0000 UTC m=+0.187464515 container cleanup 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop node_exporter
Dec  5 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  5 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.timer: No such file or directory
Dec  5 01:36:42 compute-0 systemd[1]: 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: Failed to open /run/systemd/transient/6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a-5a9c9f0539e84b33.service: No such file or directory
Dec  5 01:36:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:42 compute-0 podman[362103]: 2025-12-05 01:36:42.898134454 +0000 UTC m=+0.104080579 container remove 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:42 compute-0 podman[362104]: Error: no container with ID 6a524fdcd26759dbaa32bfe41943a8f71e7c50475256632e652d75b33fd4ed1a found in database: no such container
Dec  5 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Control process exited, code=exited, status=125/n/a
Dec  5 01:36:42 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  5 01:36:42 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force node_exporter
Dec  5 01:36:43 compute-0 podman[362123]: 2025-12-05 01:36:43.016113593 +0000 UTC m=+0.083266564 container create 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:36:43 compute-0 podman[362123]: 2025-12-05 01:36:42.977709522 +0000 UTC m=+0.044862523 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  5 01:36:43 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume 
/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec  5 01:36:43 compute-0 systemd[1]: edpm_node_exporter.service: Scheduled restart job, restart counter is at 1.
Dec  5 01:36:43 compute-0 systemd[1]: Stopped node_exporter container.
Dec  5 01:36:43 compute-0 systemd[1]: Starting node_exporter container...
Dec  5 01:36:43 compute-0 systemd[1]: Started libpod-conmon-602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.scope.
Dec  5 01:36:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf15be80c0dd292fd0f4a8782079e2ad181dc6be2900ae4af343360a4fced505/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  5 01:36:43 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9.
Dec  5 01:36:43 compute-0 podman[362133]: 2025-12-05 01:36:43.277801835 +0000 UTC m=+0.226746050 container init 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.310Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.312Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=arp
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=bcache
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=bonding
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=cpu
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=edac
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=filefd
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netclass
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netdev
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=netstat
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nfs
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=nvme
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=softnet
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=systemd
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=xfs
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.313Z caller=node_exporter.go:117 level=info collector=zfs
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.315Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  5 01:36:43 compute-0 node_exporter[362161]: ts=2025-12-05T01:36:43.317Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  5 01:36:43 compute-0 podman[362133]: 2025-12-05 01:36:43.323954203 +0000 UTC m=+0.272898378 container start 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:36:43 compute-0 python3[362030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start node_exporter
Dec  5 01:36:43 compute-0 podman[362139]: node_exporter
Dec  5 01:36:43 compute-0 systemd[1]: Started node_exporter container.
Dec  5 01:36:43 compute-0 podman[362170]: 2025-12-05 01:36:43.460556036 +0000 UTC m=+0.114375489 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:36:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:45 compute-0 python3.9[362366]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:36:45 compute-0 podman[362394]: 2025-12-05 01:36:45.698478181 +0000 UTC m=+0.108181194 container health_status 63ae8f56e869a556bcf15db49628ba1cd3e13e668521a261999fd7c5c211399e (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:36:45 compute-0 podman[362393]: 2025-12-05 01:36:45.699992374 +0000 UTC m=+0.114766680 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 01:36:45 compute-0 podman[362395]: 2025-12-05 01:36:45.748986202 +0000 UTC m=+0.147452919 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 01:36:45 compute-0 podman[362396]: 2025-12-05 01:36:45.780393286 +0000 UTC m=+0.172769012 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:36:46 compute-0 python3.9[362604]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:36:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:36:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:36:47 compute-0 python3.9[362755]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764898606.7420685-562-56574346602849/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:40:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v938: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:40:53 compute-0 rsyslogd[188644]: imjournal: 3625 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  5 01:40:53 compute-0 podman[393951]: 2025-12-05 01:40:53.617802477 +0000 UTC m=+0.111547420 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:40:53 compute-0 podman[393952]: 2025-12-05 01:40:53.629198398 +0000 UTC m=+0.117771196 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, release=1755695350, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec  5 01:40:53 compute-0 python3.9[394010]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  5 01:40:53 compute-0 systemd[1]: Started libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope.
Dec  5 01:40:53 compute-0 podman[394022]: 2025-12-05 01:40:53.988072147 +0000 UTC m=+0.157107082 container exec 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, version=9.4, com.redhat.component=ubi9-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release-0.7.12=)
Dec  5 01:40:54 compute-0 podman[394022]: 2025-12-05 01:40:54.023498194 +0000 UTC m=+0.192533069 container exec_died 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, release=1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:40:54 compute-0 systemd[1]: libpod-conmon-088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54.scope: Deactivated successfully.
Dec  5 01:40:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v939: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:40:55 compute-0 python3.9[394201]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.168 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:40:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:40:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:40:56 compute-0 python3.9[394353]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  5 01:40:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v940: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:40:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:40:57 compute-0 python3.9[394518]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  5 01:40:58 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec  5 01:40:58 compute-0 podman[394519]: 2025-12-05 01:40:58.098991273 +0000 UTC m=+0.163612015 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:40:58 compute-0 podman[394519]: 2025-12-05 01:40:58.114250152 +0000 UTC m=+0.178870924 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  5 01:40:58 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec  5 01:40:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v941: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:40:59 compute-0 python3.9[394702]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  5 01:40:59 compute-0 systemd[1]: Started libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope.
Dec  5 01:40:59 compute-0 podman[394703]: 2025-12-05 01:40:59.544808389 +0000 UTC m=+0.143146669 container exec 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:40:59 compute-0 podman[394703]: 2025-12-05 01:40:59.58071066 +0000 UTC m=+0.179048940 container exec_died 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:40:59 compute-0 systemd[1]: libpod-conmon-33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638.scope: Deactivated successfully.
Dec  5 01:40:59 compute-0 podman[158197]: time="2025-12-05T01:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:40:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42581 "" "Go-http-client/1.1"
Dec  5 01:40:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8088 "" "Go-http-client/1.1"
Dec  5 01:41:00 compute-0 python3.9[394884]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v942: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: ERROR   01:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:41:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:41:02 compute-0 python3.9[395036]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  5 01:41:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v943: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:03 compute-0 python3.9[395200]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  5 01:41:03 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec  5 01:41:03 compute-0 podman[395201]: 2025-12-05 01:41:03.377317 +0000 UTC m=+0.160845497 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:41:03 compute-0 podman[395201]: 2025-12-05 01:41:03.410470303 +0000 UTC m=+0.193998760 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 01:41:03 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec  5 01:41:04 compute-0 python3.9[395381]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  5 01:41:04 compute-0 systemd[1]: Started libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope.
Dec  5 01:41:04 compute-0 podman[395382]: 2025-12-05 01:41:04.688169529 +0000 UTC m=+0.141120692 container exec 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 01:41:04 compute-0 podman[395382]: 2025-12-05 01:41:04.72584758 +0000 UTC m=+0.178798793 container exec_died 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_managed=true)
Dec  5 01:41:04 compute-0 systemd[1]: libpod-conmon-4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee.scope: Deactivated successfully.
Dec  5 01:41:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v944: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:05 compute-0 python3.9[395562]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v945: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:07 compute-0 python3.9[395714]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:07 compute-0 podman[395889]: 2025-12-05 01:41:07.907575777 +0000 UTC m=+0.096755894 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:41:07 compute-0 podman[395894]: 2025-12-05 01:41:07.91585422 +0000 UTC m=+0.092683729 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:41:08 compute-0 python3.9[396004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dfaa656d-3482-4fb2-af8b-65e807081585 does not exist
Dec  5 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 243a0e70-f3aa-4e34-b39d-06927a047ecd does not exist
Dec  5 01:41:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a8bb38bf-ddc5-4b32-8485-577ea62b8e69 does not exist
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:41:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:41:08 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:41:08 compute-0 python3.9[396116]: ansible-ansible.legacy.file Invoked with mode=0640 dest=/var/lib/edpm-config/firewall/kepler.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/kepler.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v946: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.662457193 +0000 UTC m=+0.075913628 container create 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.626983374 +0000 UTC m=+0.040439889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:09 compute-0 systemd[1]: Started libpod-conmon-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope.
Dec  5 01:41:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.790607159 +0000 UTC m=+0.204063644 container init 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.80876012 +0000 UTC m=+0.222216545 container start 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.814660756 +0000 UTC m=+0.228117251 container attach 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:41:09 compute-0 wizardly_dubinsky[396384]: 167 167
Dec  5 01:41:09 compute-0 podman[396332]: 2025-12-05 01:41:09.819547274 +0000 UTC m=+0.233003729 container died 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:41:09 compute-0 systemd[1]: libpod-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope: Deactivated successfully.
Dec  5 01:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa4bad5266519ae746d0d50a7d39137d3fc9f8714ca4eb387c6d0eaa14fc35ba-merged.mount: Deactivated successfully.
Dec  5 01:41:10 compute-0 podman[396332]: 2025-12-05 01:41:10.014367626 +0000 UTC m=+0.427824041 container remove 84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_dubinsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:41:10 compute-0 systemd[1]: libpod-conmon-84527bea8d8c8a077f790366598af9a70d8bb4d6a623e97f59f8cd7d09d4d4ed.scope: Deactivated successfully.
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.105 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:41:10 compute-0 python3.9[396435]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.29562596 +0000 UTC m=+0.089581572 container create fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.257190858 +0000 UTC m=+0.051146500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:10 compute-0 systemd[1]: Started libpod-conmon-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope.
Dec  5 01:41:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.450418986 +0000 UTC m=+0.244374658 container init fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.4772084 +0000 UTC m=+0.271164002 container start fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:41:10 compute-0 podman[396444]: 2025-12-05 01:41:10.484396572 +0000 UTC m=+0.278352164 container attach fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:41:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:41:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4024983090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:41:10 compute-0 nova_compute[349548]: 2025-12-05 01:41:10.605 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:41:10 compute-0 podman[396560]: 2025-12-05 01:41:10.705565726 +0000 UTC m=+0.120285306 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  5 01:41:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v947: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.085 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.088 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4528MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.088 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.089 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:41:11 compute-0 python3.9[396657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.154 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.155 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.176 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:41:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:41:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946123790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.706 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.721 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:41:11 compute-0 focused_davinci[396503]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:41:11 compute-0 focused_davinci[396503]: --> relative data size: 1.0
Dec  5 01:41:11 compute-0 focused_davinci[396503]: --> All data devices are unavailable
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.743 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.747 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:41:11 compute-0 nova_compute[349548]: 2025-12-05 01:41:11.748 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:41:11 compute-0 systemd[1]: libpod-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Deactivated successfully.
Dec  5 01:41:11 compute-0 podman[396444]: 2025-12-05 01:41:11.77778342 +0000 UTC m=+1.571739052 container died fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:41:11 compute-0 systemd[1]: libpod-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Consumed 1.218s CPU time.
Dec  5 01:41:11 compute-0 python3.9[396773]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab5c05cedc3e78d08acef1290f0acd6ba33d267fd11447b9985e83f0a477f043-merged.mount: Deactivated successfully.
Dec  5 01:41:11 compute-0 podman[396444]: 2025-12-05 01:41:11.891338666 +0000 UTC m=+1.685294268 container remove fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_davinci, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:41:11 compute-0 systemd[1]: libpod-conmon-fe5a7f1d4869f4df2adda982c0df37286669866820b8dfb1f30ea3685100ac9c.scope: Deactivated successfully.
Dec  5 01:41:11 compute-0 podman[396783]: 2025-12-05 01:41:11.948576646 +0000 UTC m=+0.119060961 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  5 01:41:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.746 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.772 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:12 compute-0 nova_compute[349548]: 2025-12-05 01:41:12.773 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:12 compute-0 python3.9[397075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:12 compute-0 podman[397105]: 2025-12-05 01:41:12.995405386 +0000 UTC m=+0.101693993 container create b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:12.957707375 +0000 UTC m=+0.063996052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v948: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:13 compute-0 nova_compute[349548]: 2025-12-05 01:41:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:13 compute-0 nova_compute[349548]: 2025-12-05 01:41:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:13 compute-0 systemd[1]: Started libpod-conmon-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope.
Dec  5 01:41:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.152953979 +0000 UTC m=+0.259242606 container init b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.173646101 +0000 UTC m=+0.279934698 container start b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.180426482 +0000 UTC m=+0.286715109 container attach b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:41:13 compute-0 recursing_margulis[397157]: 167 167
Dec  5 01:41:13 compute-0 systemd[1]: libpod-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope: Deactivated successfully.
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.182437399 +0000 UTC m=+0.288725996 container died b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5440e32f38bc00d345c25ed15a65c1bec27aaa1cf44ead532bf87162c45f25a-merged.mount: Deactivated successfully.
Dec  5 01:41:13 compute-0 podman[397105]: 2025-12-05 01:41:13.256859063 +0000 UTC m=+0.363147690 container remove b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_margulis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:41:13 compute-0 systemd[1]: libpod-conmon-b5d1eeb848206a651a20427700be63fa15b64fd1126da165f0745e4cda9635ba.scope: Deactivated successfully.
Dec  5 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.541674638 +0000 UTC m=+0.091066033 container create a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:13 compute-0 python3.9[397213]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.upaytczi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.504805601 +0000 UTC m=+0.054197036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:13 compute-0 systemd[1]: Started libpod-conmon-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope.
Dec  5 01:41:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.709216762 +0000 UTC m=+0.258608187 container init a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.733330401 +0000 UTC m=+0.282721776 container start a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:41:13 compute-0 podman[397219]: 2025-12-05 01:41:13.739030131 +0000 UTC m=+0.288421586 container attach a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:41:14 compute-0 nova_compute[349548]: 2025-12-05 01:41:14.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:41:14 compute-0 magical_saha[397240]: {
Dec  5 01:41:14 compute-0 magical_saha[397240]:    "0": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:        {
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "devices": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "/dev/loop3"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            ],
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_name": "ceph_lv0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_size": "21470642176",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "name": "ceph_lv0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "tags": {
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_name": "ceph",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.crush_device_class": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.encrypted": "0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_id": "0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.vdo": "0"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            },
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "vg_name": "ceph_vg0"
Dec  5 01:41:14 compute-0 magical_saha[397240]:        }
Dec  5 01:41:14 compute-0 magical_saha[397240]:    ],
Dec  5 01:41:14 compute-0 magical_saha[397240]:    "1": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:        {
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "devices": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "/dev/loop4"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            ],
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_name": "ceph_lv1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_size": "21470642176",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "name": "ceph_lv1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "tags": {
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_name": "ceph",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.crush_device_class": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.encrypted": "0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_id": "1",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.vdo": "0"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            },
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "vg_name": "ceph_vg1"
Dec  5 01:41:14 compute-0 magical_saha[397240]:        }
Dec  5 01:41:14 compute-0 magical_saha[397240]:    ],
Dec  5 01:41:14 compute-0 magical_saha[397240]:    "2": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:        {
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "devices": [
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "/dev/loop5"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            ],
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_name": "ceph_lv2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_size": "21470642176",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "name": "ceph_lv2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "tags": {
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.cluster_name": "ceph",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.crush_device_class": "",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.encrypted": "0",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osd_id": "2",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:                "ceph.vdo": "0"
Dec  5 01:41:14 compute-0 magical_saha[397240]:            },
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "type": "block",
Dec  5 01:41:14 compute-0 magical_saha[397240]:            "vg_name": "ceph_vg2"
Dec  5 01:41:14 compute-0 magical_saha[397240]:        }
Dec  5 01:41:14 compute-0 magical_saha[397240]:    ]
Dec  5 01:41:14 compute-0 magical_saha[397240]: }
Dec  5 01:41:14 compute-0 podman[397219]: 2025-12-05 01:41:14.63659437 +0000 UTC m=+1.185985765 container died a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:41:14 compute-0 systemd[1]: libpod-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope: Deactivated successfully.
Dec  5 01:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9807186a11163dd2558bc19f06b8d37beb3fc5a9bb995a4d1a7f6c332150bf7b-merged.mount: Deactivated successfully.
Dec  5 01:41:14 compute-0 podman[397321]: 2025-12-05 01:41:14.707504015 +0000 UTC m=+0.104196133 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, config_id=edpm)
Dec  5 01:41:14 compute-0 podman[397219]: 2025-12-05 01:41:14.72755552 +0000 UTC m=+1.276946895 container remove a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:14 compute-0 systemd[1]: libpod-conmon-a8fd15e8e2a352e152600c6f94a5a469a2957c2d373891d420226910b2945a71.scope: Deactivated successfully.
Dec  5 01:41:15 compute-0 python3.9[397451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v949: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:15 compute-0 python3.9[397629]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.809941259 +0000 UTC m=+0.092610467 container create bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.779557814 +0000 UTC m=+0.062227082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:15 compute-0 systemd[1]: Started libpod-conmon-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope.
Dec  5 01:41:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.938572249 +0000 UTC m=+0.221241527 container init bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.952851671 +0000 UTC m=+0.235520899 container start bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:41:15 compute-0 hopeful_dhawan[397686]: 167 167
Dec  5 01:41:15 compute-0 systemd[1]: libpod-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope: Deactivated successfully.
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.963748718 +0000 UTC m=+0.246418016 container attach bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:41:15 compute-0 podman[397647]: 2025-12-05 01:41:15.964329034 +0000 UTC m=+0.246998262 container died bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 01:41:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3830910a0eaf9fdd6d6958c45a785bc938f3be23dddbcdc9f202731da59ccfc-merged.mount: Deactivated successfully.
Dec  5 01:41:16 compute-0 podman[397647]: 2025-12-05 01:41:16.033944213 +0000 UTC m=+0.316613411 container remove bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:41:16 compute-0 systemd[1]: libpod-conmon-bb3d1b8ecd024015348b8c1e82b170ac5ed0608669e59eb2a0e9a9183de2acd2.scope: Deactivated successfully.
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:41:16
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', '.mgr']
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.272289871 +0000 UTC m=+0.087754451 container create 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.241033031 +0000 UTC m=+0.056497661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:41:16 compute-0 systemd[1]: Started libpod-conmon-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope.
Dec  5 01:41:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.450803964 +0000 UTC m=+0.266268544 container init 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.475704975 +0000 UTC m=+0.291169565 container start 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 01:41:16 compute-0 podman[397755]: 2025-12-05 01:41:16.48476686 +0000 UTC m=+0.300231500 container attach 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:41:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:41:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v950: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:17 compute-0 python3.9[397876]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:41:17 compute-0 bold_ride[397802]: {
Dec  5 01:41:17 compute-0 bold_ride[397802]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_id": 0,
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "type": "bluestore"
Dec  5 01:41:17 compute-0 bold_ride[397802]:    },
Dec  5 01:41:17 compute-0 bold_ride[397802]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_id": 1,
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "type": "bluestore"
Dec  5 01:41:17 compute-0 bold_ride[397802]:    },
Dec  5 01:41:17 compute-0 bold_ride[397802]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_id": 2,
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:41:17 compute-0 bold_ride[397802]:        "type": "bluestore"
Dec  5 01:41:17 compute-0 bold_ride[397802]:    }
Dec  5 01:41:17 compute-0 bold_ride[397802]: }
Dec  5 01:41:17 compute-0 systemd[1]: libpod-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Deactivated successfully.
Dec  5 01:41:17 compute-0 podman[397755]: 2025-12-05 01:41:17.668939753 +0000 UTC m=+1.484404293 container died 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:41:17 compute-0 systemd[1]: libpod-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Consumed 1.196s CPU time.
Dec  5 01:41:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fbdac0c456561482c73c5847671ebdd4cb71ea133ca2dea24960cc2c2c4ed6a-merged.mount: Deactivated successfully.
Dec  5 01:41:17 compute-0 podman[397755]: 2025-12-05 01:41:17.766868349 +0000 UTC m=+1.582332889 container remove 1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:41:17 compute-0 systemd[1]: libpod-conmon-1ee30458edb4fd341d743cad4a2e9f13f1ca675c7d6f930c2da08688dad8792a.scope: Deactivated successfully.
Dec  5 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:41:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:41:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 552e0275-a09b-4f59-a750-a2a8dd30cfae does not exist
Dec  5 01:41:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fc8f7be7-f599-45e2-a321-364ef61b2a27 does not exist
Dec  5 01:41:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:41:19 compute-0 python3[398100]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  5 01:41:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v951: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:20 compute-0 python3.9[398253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:20 compute-0 python3.9[398331]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v952: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:22 compute-0 python3.9[398483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:22 compute-0 podman[398533]: 2025-12-05 01:41:22.602825669 +0000 UTC m=+0.146667929 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 01:41:22 compute-0 podman[398534]: 2025-12-05 01:41:22.640077537 +0000 UTC m=+0.182640451 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec  5 01:41:22 compute-0 python3.9[398600]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v953: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:23 compute-0 podman[398758]: 2025-12-05 01:41:23.83947618 +0000 UTC m=+0.122360895 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:41:23 compute-0 podman[398759]: 2025-12-05 01:41:23.874519756 +0000 UTC m=+0.152369089 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  5 01:41:23 compute-0 python3.9[398760]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:24 compute-0 python3.9[398875]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v954: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:25 compute-0 python3.9[399027]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:41:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:41:26 compute-0 python3.9[399105]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v955: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:28 compute-0 python3.9[399257]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v956: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:29 compute-0 python3.9[399335]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:29 compute-0 podman[158197]: time="2025-12-05T01:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:41:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:41:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8100 "" "Go-http-client/1.1"
Dec  5 01:41:30 compute-0 python3.9[399487]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:41:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v957: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: ERROR   01:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:41:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:41:32 compute-0 python3.9[399642]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v958: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:33 compute-0 python3.9[399794]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:41:34 compute-0 python3.9[399947]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  5 01:41:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v959: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:35 compute-0 python3.9[400099]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:35 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Dec  5 01:41:35 compute-0 systemd[1]: session-59.scope: Consumed 2min 6.789s CPU time.
Dec  5 01:41:35 compute-0 systemd-logind[792]: Session 59 logged out. Waiting for processes to exit.
Dec  5 01:41:35 compute-0 systemd-logind[792]: Removed session 59.
Dec  5 01:41:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v960: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:38 compute-0 podman[400125]: 2025-12-05 01:41:38.730924211 +0000 UTC m=+0.126215652 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:41:38 compute-0 podman[400124]: 2025-12-05 01:41:38.748242868 +0000 UTC m=+0.147845930 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  5 01:41:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v961: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v962: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:41 compute-0 systemd-logind[792]: New session 60 of user zuul.
Dec  5 01:41:41 compute-0 systemd[1]: Started Session 60 of User zuul.
Dec  5 01:41:41 compute-0 podman[400167]: 2025-12-05 01:41:41.253417747 +0000 UTC m=+0.143598852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125)
Dec  5 01:41:42 compute-0 podman[400313]: 2025-12-05 01:41:42.321134573 +0000 UTC m=+0.111170208 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  5 01:41:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:42 compute-0 python3.9[400357]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:41:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v963: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:44 compute-0 python3.9[400516]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  5 01:41:44 compute-0 podman[400558]: 2025-12-05 01:41:44.8851907 +0000 UTC m=+0.119216406 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, distribution-scope=public, release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 01:41:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v964: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:45 compute-0 python3.9[400688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:41:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:41:46 compute-0 python3.9[400772]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  5 01:41:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v965: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v966: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:49 compute-0 python3.9[400926]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:50 compute-0 python3.9[401004]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/pki/rsyslog/ca-openshift.crt _original_basename=ca-openshift.crt recurse=False state=file path=/etc/pki/rsyslog/ca-openshift.crt force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v967: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:51 compute-0 python3.9[401156]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:52 compute-0 podman[401308]: 2025-12-05 01:41:52.900081269 +0000 UTC m=+0.128831647 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  5 01:41:52 compute-0 python3.9[401310]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  5 01:41:52 compute-0 podman[401309]: 2025-12-05 01:41:52.975239494 +0000 UTC m=+0.198955110 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:41:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v968: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:53 compute-0 python3.9[401429]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/rsyslog.d/10-telemetry.conf _original_basename=10-telemetry.conf recurse=False state=file path=/etc/rsyslog.d/10-telemetry.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  5 01:41:54 compute-0 systemd[1]: session-60.scope: Deactivated successfully.
Dec  5 01:41:54 compute-0 systemd[1]: session-60.scope: Consumed 10.370s CPU time.
Dec  5 01:41:54 compute-0 systemd-logind[792]: Session 60 logged out. Waiting for processes to exit.
Dec  5 01:41:54 compute-0 systemd-logind[792]: Removed session 60.
Dec  5 01:41:54 compute-0 podman[401454]: 2025-12-05 01:41:54.305729045 +0000 UTC m=+0.137355947 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:41:54 compute-0 podman[401455]: 2025-12-05 01:41:54.31658839 +0000 UTC m=+0.141243825 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec  5 01:41:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v969: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:41:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:41:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:41:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v970: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:41:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v971: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:41:59 compute-0 podman[158197]: time="2025-12-05T01:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:41:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:41:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8097 "" "Go-http-client/1.1"
Dec  5 01:42:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v972: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: ERROR   01:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:42:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:42:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v973: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v974: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v975: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v976: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:09 compute-0 podman[401499]: 2025-12-05 01:42:09.706869832 +0000 UTC m=+0.110820020 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:42:09 compute-0 podman[401498]: 2025-12-05 01:42:09.714406434 +0000 UTC m=+0.123758344 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:11 compute-0 nova_compute[349548]: 2025-12-05 01:42:11.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:42:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v977: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:11 compute-0 podman[401542]: 2025-12-05 01:42:11.734389758 +0000 UTC m=+0.143987773 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.106 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:42:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:42:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/402253177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:42:12 compute-0 nova_compute[349548]: 2025-12-05 01:42:12.623 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:42:12 compute-0 podman[401580]: 2025-12-05 01:42:12.718713068 +0000 UTC m=+0.124739642 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.034 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.036 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4552MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.036 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.037 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:42:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v978: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.184 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.185 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.250 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:42:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:42:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56398953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.770 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.779 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.810 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.811 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:42:13 compute-0 nova_compute[349548]: 2025-12-05 01:42:13.812 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.809 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.809 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.810 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.810 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.831 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:42:14 compute-0 nova_compute[349548]: 2025-12-05 01:42:14.831 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:15 compute-0 nova_compute[349548]: 2025-12-05 01:42:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:42:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v979: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:15 compute-0 podman[401625]: 2025-12-05 01:42:15.715159001 +0000 UTC m=+0.122961031 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., config_id=edpm, name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:42:16
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', 'vms']
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:42:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:42:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v980: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v981: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7866598c-3932-4f75-ac79-26ffe00f94a6 does not exist
Dec  5 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c137c7ac-667a-4c66-825e-3a7d36b981a6 does not exist
Dec  5 01:42:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b534d160-a796-4ea7-a72b-75e5fe0bde3e does not exist
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:42:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:42:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.761773798 +0000 UTC m=+0.091771673 container create a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.726119065 +0000 UTC m=+0.056116990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:20 compute-0 systemd[1]: Started libpod-conmon-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope.
Dec  5 01:42:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.886603821 +0000 UTC m=+0.216601746 container init a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.904243658 +0000 UTC m=+0.234241533 container start a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.910043591 +0000 UTC m=+0.240041506 container attach a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:42:20 compute-0 nervous_gould[401926]: 167 167
Dec  5 01:42:20 compute-0 systemd[1]: libpod-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope: Deactivated successfully.
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.913977702 +0000 UTC m=+0.243975577 container died a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:42:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c5b53bd30d132caa076ce4767fcc429906df76029065b032663a65911f4a5d4-merged.mount: Deactivated successfully.
Dec  5 01:42:20 compute-0 podman[401910]: 2025-12-05 01:42:20.982244933 +0000 UTC m=+0.312242768 container remove a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_gould, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:42:21 compute-0 systemd[1]: libpod-conmon-a782cba6bd3de0444b5758a4a15bc6b1ac11fc701412165ba41731ea8300ce51.scope: Deactivated successfully.
Dec  5 01:42:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v982: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.270581977 +0000 UTC m=+0.088551123 container create ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.243237818 +0000 UTC m=+0.061207044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:21 compute-0 systemd[1]: Started libpod-conmon-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope.
Dec  5 01:42:21 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.426042392 +0000 UTC m=+0.244011568 container init ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.448235467 +0000 UTC m=+0.266204633 container start ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:42:21 compute-0 podman[401948]: 2025-12-05 01:42:21.455639385 +0000 UTC m=+0.273608561 container attach ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544959681' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:42:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:42:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770189528' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:42:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:42:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2182166203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:42:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:22 compute-0 jovial_edison[401964]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:42:22 compute-0 jovial_edison[401964]: --> relative data size: 1.0
Dec  5 01:42:22 compute-0 jovial_edison[401964]: --> All data devices are unavailable
Dec  5 01:42:22 compute-0 systemd[1]: libpod-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Deactivated successfully.
Dec  5 01:42:22 compute-0 podman[401948]: 2025-12-05 01:42:22.802675641 +0000 UTC m=+1.620644797 container died ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:42:22 compute-0 systemd[1]: libpod-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Consumed 1.299s CPU time.
Dec  5 01:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-f40add8ce8b35f55fcebe894f2d3db40c294f8f4ff2acc0a6783ce0c87496396-merged.mount: Deactivated successfully.
Dec  5 01:42:22 compute-0 podman[401948]: 2025-12-05 01:42:22.908339135 +0000 UTC m=+1.726308311 container remove ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:42:22 compute-0 systemd[1]: libpod-conmon-ea2c387fd380383b5badeae77f04383d12804f8e2653bb8732a2daa1835b0a8b.scope: Deactivated successfully.
Dec  5 01:42:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v983: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:23 compute-0 podman[402005]: 2025-12-05 01:42:23.096670485 +0000 UTC m=+0.136306507 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  5 01:42:23 compute-0 podman[402049]: 2025-12-05 01:42:23.300788809 +0000 UTC m=+0.178399532 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.105120014 +0000 UTC m=+0.082050230 container create c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.076835448 +0000 UTC m=+0.053765724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:24 compute-0 systemd[1]: Started libpod-conmon-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope.
Dec  5 01:42:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.271691871 +0000 UTC m=+0.248622147 container init c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.288014271 +0000 UTC m=+0.264944467 container start c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.294783731 +0000 UTC m=+0.271714007 container attach c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:42:24 compute-0 elegant_kapitsa[402204]: 167 167
Dec  5 01:42:24 compute-0 systemd[1]: libpod-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope: Deactivated successfully.
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.301522911 +0000 UTC m=+0.278453157 container died c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:42:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-25d83b777ed388b4fb31dcbd1984fe3fcfd7d5c4a90c7a9ac8f8bf0428364e0c-merged.mount: Deactivated successfully.
Dec  5 01:42:24 compute-0 podman[402189]: 2025-12-05 01:42:24.390343511 +0000 UTC m=+0.367273737 container remove c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:42:24 compute-0 systemd[1]: libpod-conmon-c8d7593e1307c339326a9f408bfb7d19820276b3daa0fbadafd7240afe118f2d.scope: Deactivated successfully.
Dec  5 01:42:24 compute-0 podman[402219]: 2025-12-05 01:42:24.502792155 +0000 UTC m=+0.108576667 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec  5 01:42:24 compute-0 podman[402218]: 2025-12-05 01:42:24.514644429 +0000 UTC m=+0.118651241 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.620551299 +0000 UTC m=+0.059023742 container create d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:42:24 compute-0 systemd[1]: Started libpod-conmon-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope.
Dec  5 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.59927406 +0000 UTC m=+0.037746543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.790569683 +0000 UTC m=+0.229042376 container init d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.81142196 +0000 UTC m=+0.249894443 container start d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:42:24 compute-0 podman[402271]: 2025-12-05 01:42:24.818225312 +0000 UTC m=+0.256697795 container attach d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:42:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v984: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:25 compute-0 vigilant_edison[402287]: {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    "0": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "devices": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "/dev/loop3"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            ],
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_name": "ceph_lv0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_size": "21470642176",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "name": "ceph_lv0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "tags": {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_name": "ceph",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.crush_device_class": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.encrypted": "0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_id": "0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.vdo": "0"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            },
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "vg_name": "ceph_vg0"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        }
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    ],
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    "1": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "devices": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "/dev/loop4"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            ],
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_name": "ceph_lv1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_size": "21470642176",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "name": "ceph_lv1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "tags": {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_name": "ceph",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.crush_device_class": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.encrypted": "0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_id": "1",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.vdo": "0"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            },
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "vg_name": "ceph_vg1"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        }
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    ],
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    "2": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "devices": [
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "/dev/loop5"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            ],
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_name": "ceph_lv2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_size": "21470642176",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "name": "ceph_lv2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "tags": {
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.cluster_name": "ceph",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.crush_device_class": "",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.encrypted": "0",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osd_id": "2",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:                "ceph.vdo": "0"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            },
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "type": "block",
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:            "vg_name": "ceph_vg2"
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:        }
Dec  5 01:42:25 compute-0 vigilant_edison[402287]:    ]
Dec  5 01:42:25 compute-0 vigilant_edison[402287]: }
Dec  5 01:42:25 compute-0 systemd[1]: libpod-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope: Deactivated successfully.
Dec  5 01:42:25 compute-0 podman[402271]: 2025-12-05 01:42:25.752545074 +0000 UTC m=+1.191017567 container died d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f732ac60e432fc0f755ee6a5124417ef3bf48f88c35848666e478acf704b9b6-merged.mount: Deactivated successfully.
Dec  5 01:42:25 compute-0 podman[402271]: 2025-12-05 01:42:25.861628844 +0000 UTC m=+1.300101317 container remove d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_edison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:42:25 compute-0 systemd[1]: libpod-conmon-d09219f17134eae9a337fedbeda9ee4c4be3fe1b9dc6a4d4a63b23e8cf94d653.scope: Deactivated successfully.
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:42:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:42:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:42:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4606 writes, 20K keys, 4606 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4606 writes, 4606 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1300 writes, 5648 keys, 1300 commit groups, 1.0 writes per commit group, ingest: 8.47 MB, 0.01 MB/s#012Interval WAL: 1300 writes, 1300 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    115.1      0.19              0.09        11    0.017       0      0       0.0       0.0#012  L6      1/0    6.60 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    138.9    113.8      0.61              0.30        10    0.061     42K   5270       0.0       0.0#012 Sum      1/0    6.60 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2    105.8    114.1      0.80              0.38        21    0.038     42K   5270       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.3    101.4    101.3      0.35              0.17         8    0.044     18K   2065       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    138.9    113.8      0.61              0.30        10    0.061     42K   5270       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    117.9      0.18              0.09        10    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 0.8 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 6.42 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(412,6.07 MB,1.97008%) FilterBlock(22,125.17 KB,0.0396877%) IndexBlock(22,238.05 KB,0.0754765%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.021787482 +0000 UTC m=+0.080413024 container create 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:26.986117808 +0000 UTC m=+0.044743370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:27 compute-0 systemd[1]: Started libpod-conmon-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope.
Dec  5 01:42:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v985: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.164976462 +0000 UTC m=+0.223602074 container init 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.182214467 +0000 UTC m=+0.240840009 container start 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.189857972 +0000 UTC m=+0.248483574 container attach 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:42:27 compute-0 vibrant_antonelli[402463]: 167 167
Dec  5 01:42:27 compute-0 systemd[1]: libpod-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope: Deactivated successfully.
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.19724924 +0000 UTC m=+0.255874762 container died 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd598d6fecda98660335e37ebdc7744cbbd808fafbc394a7e15906db4549f0e-merged.mount: Deactivated successfully.
Dec  5 01:42:27 compute-0 podman[402447]: 2025-12-05 01:42:27.284479935 +0000 UTC m=+0.343105487 container remove 02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_antonelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:42:27 compute-0 systemd[1]: libpod-conmon-02a85b343108b6108dade61932ca2a5c2c5bbc67203d81d75a6d090eaba00372.scope: Deactivated successfully.
Dec  5 01:42:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.579087625 +0000 UTC m=+0.087707169 container create d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.542477035 +0000 UTC m=+0.051096629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:42:27 compute-0 systemd[1]: Started libpod-conmon-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope.
Dec  5 01:42:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.758539345 +0000 UTC m=+0.267158929 container init d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.7871222 +0000 UTC m=+0.295741714 container start d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:42:27 compute-0 podman[402487]: 2025-12-05 01:42:27.792916623 +0000 UTC m=+0.301536137 container attach d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]: {
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_id": 0,
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "type": "bluestore"
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    },
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_id": 1,
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "type": "bluestore"
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    },
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_id": 2,
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:        "type": "bluestore"
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]:    }
Dec  5 01:42:28 compute-0 practical_hofstadter[402503]: }
Dec  5 01:42:28 compute-0 systemd[1]: libpod-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Deactivated successfully.
Dec  5 01:42:28 compute-0 podman[402487]: 2025-12-05 01:42:28.895814719 +0000 UTC m=+1.404434263 container died d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:42:28 compute-0 systemd[1]: libpod-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Consumed 1.117s CPU time.
Dec  5 01:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c338f33aceff9dc65f2a12d78b9d4a9fd318e7d6c88f7d60373386c42bd2491-merged.mount: Deactivated successfully.
Dec  5 01:42:28 compute-0 podman[402487]: 2025-12-05 01:42:28.982358964 +0000 UTC m=+1.490978518 container remove d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 01:42:29 compute-0 systemd[1]: libpod-conmon-d3d94b8752091a6f2c47e096e271f47b9f5ec3a27d3e755a6305639b63e4a55b.scope: Deactivated successfully.
Dec  5 01:42:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:42:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:42:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e612973b-b7ad-4afd-aec0-06d27cca8339 does not exist
Dec  5 01:42:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1e9dc24a-a411-4d39-8025-8d1aa6c4d343 does not exist
Dec  5 01:42:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v986: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:29 compute-0 podman[158197]: time="2025-12-05T01:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:42:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:42:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8105 "" "Go-http-client/1.1"
Dec  5 01:42:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:42:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v987: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.105721) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951105772, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1503, "num_deletes": 251, "total_data_size": 2380549, "memory_usage": 2430160, "flush_reason": "Manual Compaction"}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951135271, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2336232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19393, "largest_seqno": 20895, "table_properties": {"data_size": 2329238, "index_size": 4065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14437, "raw_average_key_size": 19, "raw_value_size": 2315213, "raw_average_value_size": 3180, "num_data_blocks": 185, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898793, "oldest_key_time": 1764898793, "file_creation_time": 1764898951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 29624 microseconds, and 12389 cpu microseconds.
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.135350) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2336232 bytes OK
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.135371) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137526) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137541) EVENT_LOG_v1 {"time_micros": 1764898951137537, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.137560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2373953, prev total WAL file size 2373953, number of live WAL files 2.
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.138761) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2281KB)], [47(6760KB)]
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951138858, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9258905, "oldest_snapshot_seqno": -1}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4279 keys, 7482083 bytes, temperature: kUnknown
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951213777, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7482083, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7452460, "index_size": 17801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 105759, "raw_average_key_size": 24, "raw_value_size": 7373873, "raw_average_value_size": 1723, "num_data_blocks": 747, "num_entries": 4279, "num_filter_entries": 4279, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764898951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.214238) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7482083 bytes
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.217157) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.2 rd, 99.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.6 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(7.2) write-amplify(3.2) OK, records in: 4793, records dropped: 514 output_compression: NoCompression
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.217187) EVENT_LOG_v1 {"time_micros": 1764898951217173, "job": 24, "event": "compaction_finished", "compaction_time_micros": 75153, "compaction_time_cpu_micros": 34683, "output_level": 6, "num_output_files": 1, "total_output_size": 7482083, "num_input_records": 4793, "num_output_records": 4279, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951218104, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764898951221370, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.138593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221629) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:42:31.221646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: ERROR   01:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:42:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:42:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v988: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v989: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v990: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.312 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.313 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:42:38.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:42:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v991: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:40 compute-0 podman[402606]: 2025-12-05 01:42:40.719795811 +0000 UTC m=+0.127429757 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  5 01:42:40 compute-0 podman[402607]: 2025-12-05 01:42:40.725166092 +0000 UTC m=+0.129523066 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:42:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:42 compute-0 podman[402648]: 2025-12-05 01:42:42.739326613 +0000 UTC m=+0.148098088 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:42:42 compute-0 podman[402669]: 2025-12-05 01:42:42.879755755 +0000 UTC m=+0.109059320 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 01:42:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v994: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:42:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:42:46 compute-0 podman[402688]: 2025-12-05 01:42:46.700734002 +0000 UTC m=+0.118174277 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.buildah.version=1.29.0, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git)
Dec  5 01:42:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:53 compute-0 podman[402707]: 2025-12-05 01:42:53.746628549 +0000 UTC m=+0.156366381 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 01:42:53 compute-0 podman[402708]: 2025-12-05 01:42:53.809649032 +0000 UTC m=+0.214106216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  5 01:42:54 compute-0 podman[402749]: 2025-12-05 01:42:54.735234919 +0000 UTC m=+0.136207204 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:42:54 compute-0 podman[402750]: 2025-12-05 01:42:54.737605975 +0000 UTC m=+0.133608550 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:42:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.169 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:42:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:42:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:42:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:42:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:42:59 compute-0 podman[158197]: time="2025-12-05T01:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:42:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:42:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec  5 01:43:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: ERROR   01:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:43:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:43:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  5 01:43:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876526718' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  5 01:43:04 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14377 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  5 01:43:04 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  5 01:43:04 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  5 01:43:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:11 compute-0 podman[402793]: 2025-12-05 01:43:11.715920396 +0000 UTC m=+0.110582253 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:43:11 compute-0 podman[402792]: 2025-12-05 01:43:11.728084668 +0000 UTC m=+0.123809915 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:43:12 compute-0 nova_compute[349548]: 2025-12-05 01:43:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:12 compute-0 nova_compute[349548]: 2025-12-05 01:43:12.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:43:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.124 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:43:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:43:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3070138185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:43:13 compute-0 nova_compute[349548]: 2025-12-05 01:43:13.720 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:43:13 compute-0 podman[402853]: 2025-12-05 01:43:13.724807938 +0000 UTC m=+0.129730412 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  5 01:43:13 compute-0 podman[402854]: 2025-12-05 01:43:13.748774312 +0000 UTC m=+0.148674755 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.195 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.196 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4584MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.196 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.197 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.281 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.282 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.301 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:43:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:43:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3767186678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.788 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.798 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.813 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.815 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:43:14 compute-0 nova_compute[349548]: 2025-12-05 01:43:14.815 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:43:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:15 compute-0 nova_compute[349548]: 2025-12-05 01:43:15.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:15 compute-0 nova_compute[349548]: 2025-12-05 01:43:15.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:43:16
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.mgr']
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.459 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.460 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.461 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.485 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.487 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.488 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:16 compute-0 nova_compute[349548]: 2025-12-05 01:43:16.488 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:43:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:43:17 compute-0 ceph-mgr[193209]: client.0 ms_handle_reset on v2:192.168.122.100:6800/858078637
Dec  5 01:43:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:17 compute-0 podman[402914]: 2025-12-05 01:43:17.710594622 +0000 UTC m=+0.128545188 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:43:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:24 compute-0 podman[402935]: 2025-12-05 01:43:24.713363308 +0000 UTC m=+0.121217703 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:43:24 compute-0 podman[402936]: 2025-12-05 01:43:24.76178744 +0000 UTC m=+0.170736185 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  5 01:43:24 compute-0 podman[402979]: 2025-12-05 01:43:24.901875223 +0000 UTC m=+0.096896498 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:43:24 compute-0 podman[402980]: 2025-12-05 01:43:24.953086084 +0000 UTC m=+0.139907978 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.buildah.version=1.33.7, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  5 01:43:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Dec  5 01:43:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3920326235' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.14383 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:43:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:43:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:29 compute-0 podman[158197]: time="2025-12-05T01:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:43:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:43:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7088d53b-39bd-4c69-9627-eed289b12f36 does not exist
Dec  5 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d4fad16-5f58-4fcf-8dd0-d3295ec433de does not exist
Dec  5 01:43:30 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bf8ed7d2-c731-42e9-a95b-dad008959aa4 does not exist
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:43:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:43:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: ERROR   01:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:43:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:43:31 compute-0 podman[403293]: 2025-12-05 01:43:31.876472015 +0000 UTC m=+0.111078307 container create 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:43:31 compute-0 podman[403293]: 2025-12-05 01:43:31.835553893 +0000 UTC m=+0.070160245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:31 compute-0 systemd[1]: Started libpod-conmon-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope.
Dec  5 01:43:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.014146269 +0000 UTC m=+0.248752621 container init 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Dec  5 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.038801143 +0000 UTC m=+0.273407435 container start 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.045761029 +0000 UTC m=+0.280367321 container attach 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Dec  5 01:43:32 compute-0 mystifying_dirac[403309]: 167 167
Dec  5 01:43:32 compute-0 systemd[1]: libpod-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope: Deactivated successfully.
Dec  5 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.052100447 +0000 UTC m=+0.286706759 container died 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-e75bbf53bcd35b182351b025f08645b4810373b029537b66fb1032588183f4d7-merged.mount: Deactivated successfully.
Dec  5 01:43:32 compute-0 podman[403293]: 2025-12-05 01:43:32.13248396 +0000 UTC m=+0.367090262 container remove 91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dirac, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:43:32 compute-0 systemd[1]: libpod-conmon-91fe2b47e734f34a455f07ffa0716a4ea95cd65365714bac139d549ccc623268.scope: Deactivated successfully.
Dec  5 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.400605985 +0000 UTC m=+0.074871328 container create 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.372377791 +0000 UTC m=+0.046643174 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:32 compute-0 systemd[1]: Started libpod-conmon-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope.
Dec  5 01:43:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.543103815 +0000 UTC m=+0.217369238 container init 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.561120292 +0000 UTC m=+0.235385635 container start 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 01:43:32 compute-0 podman[403331]: 2025-12-05 01:43:32.566710769 +0000 UTC m=+0.240976182 container attach 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:43:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:33 compute-0 keen_stonebraker[403347]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:43:33 compute-0 keen_stonebraker[403347]: --> relative data size: 1.0
Dec  5 01:43:33 compute-0 keen_stonebraker[403347]: --> All data devices are unavailable
Dec  5 01:43:33 compute-0 systemd[1]: libpod-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Deactivated successfully.
Dec  5 01:43:33 compute-0 systemd[1]: libpod-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Consumed 1.277s CPU time.
Dec  5 01:43:33 compute-0 podman[403331]: 2025-12-05 01:43:33.904226218 +0000 UTC m=+1.578491571 container died 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:43:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5730e355804d460d3c550e41792c26211984e56a48526a1d70b82c629c58486e-merged.mount: Deactivated successfully.
Dec  5 01:43:33 compute-0 podman[403331]: 2025-12-05 01:43:33.995795015 +0000 UTC m=+1.670060348 container remove 3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:43:34 compute-0 systemd[1]: libpod-conmon-3af65f3ed41dcfc31d32fbacf90ea24a53072352fe439701e2f54dd2f4dec6a3.scope: Deactivated successfully.
Dec  5 01:43:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1019: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.200315901 +0000 UTC m=+0.095283562 container create 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.164560015 +0000 UTC m=+0.059527716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:35 compute-0 systemd[1]: Started libpod-conmon-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope.
Dec  5 01:43:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.336685289 +0000 UTC m=+0.231652990 container init 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.354000936 +0000 UTC m=+0.248968577 container start 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.36193773 +0000 UTC m=+0.256905361 container attach 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:43:35 compute-0 stoic_carver[403545]: 167 167
Dec  5 01:43:35 compute-0 systemd[1]: libpod-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope: Deactivated successfully.
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.365554952 +0000 UTC m=+0.260522633 container died 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:43:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba0be9a4c806b0a40b06cdfb7c93640ed1c146b223cf566000f3c716d69d3288-merged.mount: Deactivated successfully.
Dec  5 01:43:35 compute-0 podman[403529]: 2025-12-05 01:43:35.438298039 +0000 UTC m=+0.333265670 container remove 76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:43:35 compute-0 systemd[1]: libpod-conmon-76e3a69a38bbf8f69a9ce9e2cf4dac58c5a8d3e40537540fb28374fa9c19dbdb.scope: Deactivated successfully.
Dec  5 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.727270851 +0000 UTC m=+0.091984070 container create 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.693312205 +0000 UTC m=+0.058025474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:35 compute-0 systemd[1]: Started libpod-conmon-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope.
Dec  5 01:43:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.897240474 +0000 UTC m=+0.261953723 container init 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.931366944 +0000 UTC m=+0.296080163 container start 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:43:35 compute-0 podman[403570]: 2025-12-05 01:43:35.937817716 +0000 UTC m=+0.302530985 container attach 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:43:36 compute-0 reverent_wiles[403587]: {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    "0": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "devices": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "/dev/loop3"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            ],
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_name": "ceph_lv0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_size": "21470642176",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "name": "ceph_lv0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "tags": {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_name": "ceph",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.crush_device_class": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.encrypted": "0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_id": "0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.vdo": "0"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            },
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "vg_name": "ceph_vg0"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        }
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    ],
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    "1": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "devices": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "/dev/loop4"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            ],
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_name": "ceph_lv1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_size": "21470642176",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "name": "ceph_lv1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "tags": {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_name": "ceph",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.crush_device_class": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.encrypted": "0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_id": "1",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.vdo": "0"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            },
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "vg_name": "ceph_vg1"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        }
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    ],
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    "2": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "devices": [
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "/dev/loop5"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            ],
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_name": "ceph_lv2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_size": "21470642176",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "name": "ceph_lv2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "tags": {
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.cluster_name": "ceph",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.crush_device_class": "",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.encrypted": "0",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osd_id": "2",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:                "ceph.vdo": "0"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            },
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "type": "block",
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:            "vg_name": "ceph_vg2"
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:        }
Dec  5 01:43:36 compute-0 reverent_wiles[403587]:    ]
Dec  5 01:43:36 compute-0 reverent_wiles[403587]: }
Dec  5 01:43:36 compute-0 systemd[1]: libpod-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope: Deactivated successfully.
Dec  5 01:43:36 compute-0 podman[403570]: 2025-12-05 01:43:36.786118828 +0000 UTC m=+1.150832077 container died 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:43:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-26b31646c1cd145c7e61e246b89d0a16caa6b8b63d3e2b8b8231027927c5311d-merged.mount: Deactivated successfully.
Dec  5 01:43:36 compute-0 podman[403570]: 2025-12-05 01:43:36.873710872 +0000 UTC m=+1.238424051 container remove 1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:43:36 compute-0 systemd[1]: libpod-conmon-1dd588ffe7d321461d73aa4e3720352bc71e1790fe811ff2309290dbcc6a1f81.scope: Deactivated successfully.
Dec  5 01:43:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1020: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.0516151 +0000 UTC m=+0.080564108 container create 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.019052824 +0000 UTC m=+0.048001842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:38 compute-0 systemd[1]: Started libpod-conmon-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope.
Dec  5 01:43:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.211830199 +0000 UTC m=+0.240779207 container init 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.229025843 +0000 UTC m=+0.257974851 container start 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.236094832 +0000 UTC m=+0.265043890 container attach 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:43:38 compute-0 adoring_grothendieck[403764]: 167 167
Dec  5 01:43:38 compute-0 systemd[1]: libpod-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope: Deactivated successfully.
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.241556175 +0000 UTC m=+0.270505183 container died 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:43:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-067adc8194c95d3bb24241c8649b3e74e7654c2f609745fbefe60e681b4aa8f8-merged.mount: Deactivated successfully.
Dec  5 01:43:38 compute-0 podman[403749]: 2025-12-05 01:43:38.306650757 +0000 UTC m=+0.335599725 container remove 2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_grothendieck, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:43:38 compute-0 systemd[1]: libpod-conmon-2b61d259e0e1ab65efa375b395be6107647ddb386c59cec12a15e3acb555e8de.scope: Deactivated successfully.
Dec  5 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.602794141 +0000 UTC m=+0.097945507 container create 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.57078384 +0000 UTC m=+0.065935256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:43:38 compute-0 systemd[1]: Started libpod-conmon-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope.
Dec  5 01:43:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.775762629 +0000 UTC m=+0.270914055 container init 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.801516374 +0000 UTC m=+0.296667740 container start 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:43:38 compute-0 podman[403787]: 2025-12-05 01:43:38.808963973 +0000 UTC m=+0.304115399 container attach 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:43:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:39 compute-0 laughing_kirch[403803]: {
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_id": 0,
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "type": "bluestore"
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    },
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_id": 1,
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "type": "bluestore"
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    },
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_id": 2,
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:        "type": "bluestore"
Dec  5 01:43:39 compute-0 laughing_kirch[403803]:    }
Dec  5 01:43:39 compute-0 laughing_kirch[403803]: }
Dec  5 01:43:40 compute-0 systemd[1]: libpod-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Deactivated successfully.
Dec  5 01:43:40 compute-0 systemd[1]: libpod-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Consumed 1.228s CPU time.
Dec  5 01:43:40 compute-0 podman[403836]: 2025-12-05 01:43:40.124281768 +0000 UTC m=+0.065102513 container died 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:43:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7500c5c55bfcaa541ae6fa2b90b22a8bf7226612b4d8b29a300804052b22d133-merged.mount: Deactivated successfully.
Dec  5 01:43:40 compute-0 podman[403836]: 2025-12-05 01:43:40.259653607 +0000 UTC m=+0.200474312 container remove 98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:43:40 compute-0 systemd[1]: libpod-conmon-98e37e6af726f305a32294c320c231adbd74a22e516a187ca936c1ec7ac9a091.scope: Deactivated successfully.
Dec  5 01:43:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:43:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:43:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:40 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 448b738f-8d3e-4aab-b1d1-bdc02a44c502 does not exist
Dec  5 01:43:40 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 40a4e974-f7f8-43ca-8fbf-8483553eb1f3 does not exist
Dec  5 01:43:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:43:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:42 compute-0 podman[403900]: 2025-12-05 01:43:42.724692836 +0000 UTC m=+0.128440295 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:43:42 compute-0 podman[403901]: 2025-12-05 01:43:42.749499214 +0000 UTC m=+0.150153116 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:43:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:44 compute-0 podman[403941]: 2025-12-05 01:43:44.730005588 +0000 UTC m=+0.129580038 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:43:44 compute-0 podman[403940]: 2025-12-05 01:43:44.752498691 +0000 UTC m=+0.159386877 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  5 01:43:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:43:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:43:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:43:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/800433581' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:43:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:43:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:48 compute-0 podman[403976]: 2025-12-05 01:43:48.757719103 +0000 UTC m=+0.157755330 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Dec  5 01:43:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1027: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:55 compute-0 podman[403998]: 2025-12-05 01:43:55.718700063 +0000 UTC m=+0.107563708 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:43:55 compute-0 podman[403997]: 2025-12-05 01:43:55.756953 +0000 UTC m=+0.156393153 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:43:55 compute-0 podman[404005]: 2025-12-05 01:43:55.783776804 +0000 UTC m=+0.154958971 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, version=9.6, config_id=edpm, architecture=x86_64, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal)
Dec  5 01:43:55 compute-0 podman[403999]: 2025-12-05 01:43:55.784213497 +0000 UTC m=+0.161013422 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec  5 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.170 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:43:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:43:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:43:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:43:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:43:59 compute-0 podman[158197]: time="2025-12-05T01:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:43:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:43:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8105 "" "Go-http-client/1.1"
Dec  5 01:44:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: ERROR   01:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:44:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:44:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:12 compute-0 nova_compute[349548]: 2025-12-05 01:44:12.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:12 compute-0 nova_compute[349548]: 2025-12-05 01:44:12.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:44:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:13 compute-0 nova_compute[349548]: 2025-12-05 01:44:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:13 compute-0 podman[404086]: 2025-12-05 01:44:13.715848088 +0000 UTC m=+0.125877209 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:44:13 compute-0 podman[404087]: 2025-12-05 01:44:13.739459681 +0000 UTC m=+0.143263877 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.149 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:44:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:44:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1262124617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:44:14 compute-0 nova_compute[349548]: 2025-12-05 01:44:14.636 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.165 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.167 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4586MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.168 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.169 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:44:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.276 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.277 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.305 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:44:15 compute-0 podman[404168]: 2025-12-05 01:44:15.731791248 +0000 UTC m=+0.138482663 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  5 01:44:15 compute-0 podman[404169]: 2025-12-05 01:44:15.758510419 +0000 UTC m=+0.158518376 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  5 01:44:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:44:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2676241468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.877 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.891 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.916 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.919 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:44:15 compute-0 nova_compute[349548]: 2025-12-05 01:44:15.919 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:44:16
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms']
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:44:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.916 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.916 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.917 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.917 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.946 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.947 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:16 compute-0 nova_compute[349548]: 2025-12-05 01:44:16.948 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:17 compute-0 nova_compute[349548]: 2025-12-05 01:44:17.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:44:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:19 compute-0 podman[404212]: 2025-12-05 01:44:19.769772078 +0000 UTC m=+0.119791347 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, distribution-scope=public)
Dec  5 01:44:20 compute-0 systemd-logind[792]: New session 61 of user zuul.
Dec  5 01:44:20 compute-0 systemd[1]: Started Session 61 of User zuul.
Dec  5 01:44:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:22 compute-0 python3[404407]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:44:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:44:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5879 writes, 24K keys, 5879 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5879 writes, 995 syncs, 5.91 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:44:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:24 compute-0 python3[404640]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:44:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:26 compute-0 podman[404794]: 2025-12-05 01:44:26.010707206 +0000 UTC m=+0.100841085 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:44:26 compute-0 podman[404793]: 2025-12-05 01:44:26.02650011 +0000 UTC m=+0.115017724 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:44:26 compute-0 podman[404796]: 2025-12-05 01:44:26.058089708 +0000 UTC m=+0.127446373 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Dec  5 01:44:26 compute-0 podman[404795]: 2025-12-05 01:44:26.080614321 +0000 UTC m=+0.158580868 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:44:26 compute-0 python3[404808]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:44:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:44:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:29 compute-0 python3[405028]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  5 01:44:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:44:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.2 total, 600.0 interval#012Cumulative writes: 7187 writes, 29K keys, 7187 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7187 writes, 1327 syncs, 5.42 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:44:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:29 compute-0 podman[158197]: time="2025-12-05T01:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:44:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:44:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8106 "" "Go-http-client/1.1"
Dec  5 01:44:30 compute-0 python3[405181]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  5 01:44:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: ERROR   01:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:44:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:44:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:33 compute-0 python3[405416]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:44:34 compute-0 python3[405582]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 01:44:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:44:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5917 writes, 24K keys, 5917 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5917 writes, 1021 syncs, 5.80 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:44:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.313 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.314 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.319 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:44:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:44:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 17616af6-a58a-49a5-9032-cec58491f179 does not exist
Dec  5 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b6f979b-329d-40c6-991d-42ff470aaf31 does not exist
Dec  5 01:44:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4ada5392-ece7-42b2-811a-41983204fae1 does not exist
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:44:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:44:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:44:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.863304472 +0000 UTC m=+0.067371524 container create b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:44:43 compute-0 systemd[1]: Started libpod-conmon-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope.
Dec  5 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.832427054 +0000 UTC m=+0.036494116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:43 compute-0 podman[406007]: 2025-12-05 01:44:43.993359687 +0000 UTC m=+0.197426749 container init b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.011262381 +0000 UTC m=+0.215329413 container start b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.017102295 +0000 UTC m=+0.221169337 container attach b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:44:44 compute-0 exciting_taussig[406033]: 167 167
Dec  5 01:44:44 compute-0 systemd[1]: libpod-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope: Deactivated successfully.
Dec  5 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.019349898 +0000 UTC m=+0.223416930 container died b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3c43527ab1ac3b07b2ba8f365c6f1121d7de1069f6858019d8fe9c2a5bc4f1c-merged.mount: Deactivated successfully.
Dec  5 01:44:44 compute-0 podman[406022]: 2025-12-05 01:44:44.055669669 +0000 UTC m=+0.125403506 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  5 01:44:44 compute-0 podman[406025]: 2025-12-05 01:44:44.060196766 +0000 UTC m=+0.121049253 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:44:44 compute-0 podman[406007]: 2025-12-05 01:44:44.083173302 +0000 UTC m=+0.287240334 container remove b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_taussig, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:44:44 compute-0 systemd[1]: libpod-conmon-b4b51294a39d3e7fd0c92a75eb26a13d820a7aadd90d72328f100e015a3c14c6.scope: Deactivated successfully.
Dec  5 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.357633606 +0000 UTC m=+0.093832509 container create 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.319683619 +0000 UTC m=+0.055882582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:44 compute-0 systemd[1]: Started libpod-conmon-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope.
Dec  5 01:44:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.554761986 +0000 UTC m=+0.290960859 container init 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.587350012 +0000 UTC m=+0.323548885 container start 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:44:44 compute-0 podman[406085]: 2025-12-05 01:44:44.593125475 +0000 UTC m=+0.329324358 container attach 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:44:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:44:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:44:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:44:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3697009052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:44:45 compute-0 crazy_lumiere[406100]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:44:45 compute-0 crazy_lumiere[406100]: --> relative data size: 1.0
Dec  5 01:44:45 compute-0 crazy_lumiere[406100]: --> All data devices are unavailable
Dec  5 01:44:45 compute-0 systemd[1]: libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Deactivated successfully.
Dec  5 01:44:45 compute-0 systemd[1]: libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Consumed 1.224s CPU time.
Dec  5 01:44:45 compute-0 conmon[406100]: conmon 56db989389da1f1f0a8e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope/container/memory.events
Dec  5 01:44:45 compute-0 podman[406085]: 2025-12-05 01:44:45.873568942 +0000 UTC m=+1.609767835 container died 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-df0d9fa909de6b81db7c2fcd5d75200b5d9bda7680b7b684209aceea715cd60b-merged.mount: Deactivated successfully.
Dec  5 01:44:45 compute-0 podman[406085]: 2025-12-05 01:44:45.961355739 +0000 UTC m=+1.697554592 container remove 56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:44:45 compute-0 systemd[1]: libpod-conmon-56db989389da1f1f0a8ee97504caf164c3a5427f211676b469490ecffbb4b15c.scope: Deactivated successfully.
Dec  5 01:44:46 compute-0 podman[406133]: 2025-12-05 01:44:46.003449242 +0000 UTC m=+0.093463448 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Dec  5 01:44:46 compute-0 podman[406131]: 2025-12-05 01:44:46.019443922 +0000 UTC m=+0.102078960 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:44:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.113292666 +0000 UTC m=+0.092519322 container create 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.078699553 +0000 UTC m=+0.057926259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:47 compute-0 systemd[1]: Started libpod-conmon-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope.
Dec  5 01:44:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.261617895 +0000 UTC m=+0.240844591 container init 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.279648211 +0000 UTC m=+0.258874867 container start 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.286275448 +0000 UTC m=+0.265502224 container attach 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:44:47 compute-0 agitated_poitras[406332]: 167 167
Dec  5 01:44:47 compute-0 systemd[1]: libpod-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope: Deactivated successfully.
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.29383807 +0000 UTC m=+0.273064736 container died 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:44:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-320511e234d1b44ea5df8b281e8260f0dd94e94c7290fe6aec6497123d2a28d0-merged.mount: Deactivated successfully.
Dec  5 01:44:47 compute-0 podman[406316]: 2025-12-05 01:44:47.358193719 +0000 UTC m=+0.337420355 container remove 81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poitras, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:44:47 compute-0 systemd[1]: libpod-conmon-81760f0574973f628e830eac6644e27c6e61a2353c4c9ec6c1daae5c26c35ee3.scope: Deactivated successfully.
Dec  5 01:44:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.594431109 +0000 UTC m=+0.081494012 container create d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.559949439 +0000 UTC m=+0.047012372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:47 compute-0 systemd[1]: Started libpod-conmon-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope.
Dec  5 01:44:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.751007619 +0000 UTC m=+0.238070582 container init d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.769456148 +0000 UTC m=+0.256519051 container start d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:44:47 compute-0 podman[406356]: 2025-12-05 01:44:47.776355932 +0000 UTC m=+0.263418835 container attach d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]: {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    "0": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "devices": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "/dev/loop3"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            ],
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_name": "ceph_lv0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_size": "21470642176",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "name": "ceph_lv0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "tags": {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_name": "ceph",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.crush_device_class": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.encrypted": "0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_id": "0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.vdo": "0"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            },
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "vg_name": "ceph_vg0"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        }
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    ],
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    "1": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "devices": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "/dev/loop4"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            ],
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_name": "ceph_lv1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_size": "21470642176",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "name": "ceph_lv1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "tags": {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_name": "ceph",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.crush_device_class": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.encrypted": "0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_id": "1",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.vdo": "0"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            },
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "vg_name": "ceph_vg1"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        }
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    ],
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    "2": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "devices": [
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "/dev/loop5"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            ],
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_name": "ceph_lv2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_size": "21470642176",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "name": "ceph_lv2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "tags": {
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.cluster_name": "ceph",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.crush_device_class": "",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.encrypted": "0",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osd_id": "2",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:                "ceph.vdo": "0"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            },
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "type": "block",
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:            "vg_name": "ceph_vg2"
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:        }
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]:    ]
Dec  5 01:44:48 compute-0 heuristic_gauss[406371]: }
Dec  5 01:44:48 compute-0 systemd[1]: libpod-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope: Deactivated successfully.
Dec  5 01:44:48 compute-0 conmon[406371]: conmon d715c8d72d6514f0205f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope/container/memory.events
Dec  5 01:44:48 compute-0 podman[406356]: 2025-12-05 01:44:48.578465365 +0000 UTC m=+1.065528268 container died d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-baae144ac87900c9e4aa0ad1f15d27744aa071ef01f3ac4caed847205af80a96-merged.mount: Deactivated successfully.
Dec  5 01:44:48 compute-0 podman[406356]: 2025-12-05 01:44:48.6707971 +0000 UTC m=+1.157859973 container remove d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:44:48 compute-0 systemd[1]: libpod-conmon-d715c8d72d6514f0205f883f2890dae5b75c07185bebdc25b2f42f67f16593cf.scope: Deactivated successfully.
Dec  5 01:44:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:49 compute-0 podman[406528]: 2025-12-05 01:44:49.92462024 +0000 UTC m=+0.102313916 container create 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:44:49 compute-0 podman[406528]: 2025-12-05 01:44:49.889495663 +0000 UTC m=+0.067189399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:49 compute-0 systemd[1]: Started libpod-conmon-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope.
Dec  5 01:44:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.063054321 +0000 UTC m=+0.240748077 container init 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.075179192 +0000 UTC m=+0.252872878 container start 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.082015974 +0000 UTC m=+0.259709660 container attach 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec  5 01:44:50 compute-0 boring_saha[406544]: 167 167
Dec  5 01:44:50 compute-0 systemd[1]: libpod-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope: Deactivated successfully.
Dec  5 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.088602189 +0000 UTC m=+0.266295875 container died 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:44:50 compute-0 podman[406541]: 2025-12-05 01:44:50.120120765 +0000 UTC m=+0.120639812 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9)
Dec  5 01:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-45601bda4962d8981c7cb2aefb96d011fc47490b2945fc7f57251a787dfe8b4f-merged.mount: Deactivated successfully.
Dec  5 01:44:50 compute-0 podman[406528]: 2025-12-05 01:44:50.159164702 +0000 UTC m=+0.336858358 container remove 7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:44:50 compute-0 systemd[1]: libpod-conmon-7ce6b4aa09b7b2bce7ec34d26434450a2059ce25260910ceae061856b1b5f802.scope: Deactivated successfully.
Dec  5 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.426013282 +0000 UTC m=+0.083398075 container create e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.393961382 +0000 UTC m=+0.051346215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:44:50 compute-0 systemd[1]: Started libpod-conmon-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope.
Dec  5 01:44:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.598243573 +0000 UTC m=+0.255628406 container init e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.622202196 +0000 UTC m=+0.279586979 container start e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:44:50 compute-0 podman[406587]: 2025-12-05 01:44:50.628801102 +0000 UTC m=+0.286185925 container attach e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:44:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]: {
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_id": 0,
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "type": "bluestore"
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    },
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_id": 1,
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "type": "bluestore"
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    },
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_id": 2,
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:        "type": "bluestore"
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]:    }
Dec  5 01:44:51 compute-0 nostalgic_golick[406602]: }
Dec  5 01:44:51 compute-0 systemd[1]: libpod-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Deactivated successfully.
Dec  5 01:44:51 compute-0 systemd[1]: libpod-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Consumed 1.138s CPU time.
Dec  5 01:44:51 compute-0 podman[406635]: 2025-12-05 01:44:51.808981762 +0000 UTC m=+0.036338142 container died e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:44:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6723c404a40397fd4499020d85fc373173d7951e03f448f05feccb7c83d6e213-merged.mount: Deactivated successfully.
Dec  5 01:44:51 compute-0 podman[406635]: 2025-12-05 01:44:51.93450668 +0000 UTC m=+0.161862960 container remove e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_golick, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:44:51 compute-0 systemd[1]: libpod-conmon-e602d04df59b540847cddefd4b097aad015499b76d6495e8721230da5c9c50e3.scope: Deactivated successfully.
Dec  5 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:44:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:44:52 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:52 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9ec7a2e1-0333-48d5-a24b-4930c0bc42d6 does not exist
Dec  5 01:44:52 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e9271a9c-4bd9-4179-9763-a15aa1da35dd does not exist
Dec  5 01:44:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:44:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.171 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.172 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:44:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:44:56.172 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:44:56 compute-0 podman[406701]: 2025-12-05 01:44:56.708368443 +0000 UTC m=+0.102291266 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:44:56 compute-0 podman[406700]: 2025-12-05 01:44:56.741304119 +0000 UTC m=+0.135838849 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2)
Dec  5 01:44:56 compute-0 podman[406703]: 2025-12-05 01:44:56.742373219 +0000 UTC m=+0.127582057 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7)
Dec  5 01:44:56 compute-0 podman[406702]: 2025-12-05 01:44:56.757571146 +0000 UTC m=+0.146738045 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:44:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:44:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:44:59 compute-0 podman[158197]: time="2025-12-05T01:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:44:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:44:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8122 "" "Go-http-client/1.1"
Dec  5 01:45:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: ERROR   01:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:45:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:45:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:08 compute-0 nova_compute[349548]: 2025-12-05 01:45:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:08 compute-0 nova_compute[349548]: 2025-12-05 01:45:08.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 01:45:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.098 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 01:45:11 compute-0 nova_compute[349548]: 2025-12-05 01:45:11.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 01:45:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:13 compute-0 nova_compute[349548]: 2025-12-05 01:45:13.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:14 compute-0 nova_compute[349548]: 2025-12-05 01:45:14.085 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:14 compute-0 nova_compute[349548]: 2025-12-05 01:45:14.086 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:45:14 compute-0 podman[406782]: 2025-12-05 01:45:14.723464835 +0000 UTC m=+0.125594401 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:45:14 compute-0 podman[406783]: 2025-12-05 01:45:14.742352686 +0000 UTC m=+0.142002672 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.085 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.125 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.126 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.127 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:45:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:45:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264434584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:45:15 compute-0 nova_compute[349548]: 2025-12-05 01:45:15.660 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:45:16
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.257 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.258 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4571MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.259 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.259 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.614 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.615 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:45:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:45:16 compute-0 podman[406843]: 2025-12-05 01:45:16.693931527 +0000 UTC m=+0.102536083 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.712 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 01:45:16 compute-0 podman[406844]: 2025-12-05 01:45:16.721736209 +0000 UTC m=+0.125363305 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.821 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.821 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.839 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.860 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 01:45:16 compute-0 nova_compute[349548]: 2025-12-05 01:45:16.874 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:45:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:45:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565133677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.353 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.365 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.379 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.382 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:45:17 compute-0 nova_compute[349548]: 2025-12-05 01:45:17.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:45:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.361 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.362 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.381 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.381 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:18 compute-0 nova_compute[349548]: 2025-12-05 01:45:18.382 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:20 compute-0 podman[406905]: 2025-12-05 01:45:20.744707817 +0000 UTC m=+0.150069858 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  5 01:45:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1072: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:45:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:45:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:27 compute-0 podman[406925]: 2025-12-05 01:45:27.714431237 +0000 UTC m=+0.122886375 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:45:27 compute-0 podman[406924]: 2025-12-05 01:45:27.722860904 +0000 UTC m=+0.127892246 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:45:27 compute-0 podman[406927]: 2025-12-05 01:45:27.724570992 +0000 UTC m=+0.108439109 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:45:27 compute-0 podman[406926]: 2025-12-05 01:45:27.773054473 +0000 UTC m=+0.162422135 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:45:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:29 compute-0 podman[158197]: time="2025-12-05T01:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:45:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:45:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec  5 01:45:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1077: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: ERROR   01:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:45:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:45:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:34 compute-0 systemd[1]: session-61.scope: Deactivated successfully.
Dec  5 01:45:34 compute-0 systemd[1]: session-61.scope: Consumed 12.578s CPU time.
Dec  5 01:45:34 compute-0 systemd-logind[792]: Session 61 logged out. Waiting for processes to exit.
Dec  5 01:45:34 compute-0 systemd-logind[792]: Removed session 61.
Dec  5 01:45:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:37 compute-0 nova_compute[349548]: 2025-12-05 01:45:37.077 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:45:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1083: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:45:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:45:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:45:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/919219471' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:45:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:45 compute-0 podman[407007]: 2025-12-05 01:45:45.703984807 +0000 UTC m=+0.109424336 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  5 01:45:45 compute-0 podman[407008]: 2025-12-05 01:45:45.744974169 +0000 UTC m=+0.147543587 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:45:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:45:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:47 compute-0 podman[407046]: 2025-12-05 01:45:47.724442174 +0000 UTC m=+0.134428789 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:45:47 compute-0 podman[407047]: 2025-12-05 01:45:47.732598553 +0000 UTC m=+0.133030960 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:45:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:51 compute-0 podman[407082]: 2025-12-05 01:45:51.697311087 +0000 UTC m=+0.103339716 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, architecture=x86_64, io.openshift.expose-services=, name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Dec  5 01:45:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:45:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:45:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.754003484 +0000 UTC m=+0.115244830 container create 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.714764661 +0000 UTC m=+0.076006057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:45:55 compute-0 systemd[1]: Started libpod-conmon-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope.
Dec  5 01:45:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.916657905 +0000 UTC m=+0.277899301 container init 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.933996373 +0000 UTC m=+0.295237719 container start 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.941605196 +0000 UTC m=+0.302846562 container attach 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:45:55 compute-0 tender_burnell[407509]: 167 167
Dec  5 01:45:55 compute-0 systemd[1]: libpod-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope: Deactivated successfully.
Dec  5 01:45:55 compute-0 podman[407492]: 2025-12-05 01:45:55.945715552 +0000 UTC m=+0.306956938 container died 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:45:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-d607773a2de060657b6c11bce39c69d15895930ea2e7f6662bf7909e5d353243-merged.mount: Deactivated successfully.
Dec  5 01:45:56 compute-0 podman[407492]: 2025-12-05 01:45:56.032608944 +0000 UTC m=+0.393850290 container remove 5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:45:56 compute-0 systemd[1]: libpod-conmon-5c545500c339fff151004d36ad0716012fed517765edb11fbf315323c53223af.scope: Deactivated successfully.
Dec  5 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.173 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:45:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:45:56.176 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.3313471 +0000 UTC m=+0.097022197 container create cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.28827316 +0000 UTC m=+0.053948317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:45:56 compute-0 systemd[1]: Started libpod-conmon-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope.
Dec  5 01:45:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:45:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.493020803 +0000 UTC m=+0.258695950 container init cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.511945035 +0000 UTC m=+0.277620112 container start cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 01:45:56 compute-0 podman[407532]: 2025-12-05 01:45:56.517533442 +0000 UTC m=+0.283208579 container attach cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:45:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:45:58 compute-0 podman[408877]: 2025-12-05 01:45:58.695668512 +0000 UTC m=+0.094538518 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:45:58 compute-0 podman[408857]: 2025-12-05 01:45:58.731480608 +0000 UTC m=+0.133350039 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:45:58 compute-0 podman[408888]: 2025-12-05 01:45:58.748324632 +0000 UTC m=+0.137236759 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 01:45:58 compute-0 podman[408883]: 2025-12-05 01:45:58.760749191 +0000 UTC m=+0.157457877 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]: [
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:    {
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "available": false,
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "ceph_device": false,
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "lsm_data": {},
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "lvs": [],
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "path": "/dev/sr0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "rejected_reasons": [
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "Has a FileSystem",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "Insufficient space (<5GB)"
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        ],
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        "sys_api": {
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "actuators": null,
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "device_nodes": "sr0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "devname": "sr0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "human_readable_size": "482.00 KB",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "id_bus": "ata",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "model": "QEMU DVD-ROM",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "nr_requests": "2",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "parent": "/dev/sr0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "partitions": {},
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "path": "/dev/sr0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "removable": "1",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "rev": "2.5+",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "ro": "0",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "rotational": "1",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "sas_address": "",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "sas_device_handle": "",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "scheduler_mode": "mq-deadline",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "sectors": 0,
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "sectorsize": "2048",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "size": 493568.0,
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "support_discard": "2048",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "type": "disk",
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:            "vendor": "QEMU"
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:        }
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]:    }
Dec  5 01:45:58 compute-0 hardcore_fermi[407548]: ]
Dec  5 01:45:58 compute-0 systemd[1]: libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Deactivated successfully.
Dec  5 01:45:58 compute-0 systemd[1]: libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Consumed 2.534s CPU time.
Dec  5 01:45:58 compute-0 conmon[407548]: conmon cfd3d628a48a7b31438c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope/container/memory.events
Dec  5 01:45:58 compute-0 podman[407532]: 2025-12-05 01:45:58.966164344 +0000 UTC m=+2.731839411 container died cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:45:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-620f88242226ee491e83cda68696f6d4751454792e33146a568d360cb24b32a9-merged.mount: Deactivated successfully.
Dec  5 01:45:59 compute-0 podman[407532]: 2025-12-05 01:45:59.049828756 +0000 UTC m=+2.815503813 container remove cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 01:45:59 compute-0 systemd[1]: libpod-conmon-cfd3d628a48a7b31438ca05b851a62fa6d18b0833a4ddc54a245e112c02805cf.scope: Deactivated successfully.
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 89d00e85-6e9c-40d6-83da-a3f4c49764c9 does not exist
Dec  5 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 17b7d2b3-59ba-40cb-99aa-29cde6a8cf09 does not exist
Dec  5 01:45:59 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d87436a0-8eca-4a80-840b-fee71c34b557 does not exist
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:45:59 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:45:59 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:45:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:45:59 compute-0 podman[158197]: time="2025-12-05T01:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:45:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:45:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8117 "" "Go-http-client/1.1"
Dec  5 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.28071271 +0000 UTC m=+0.089591739 container create 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.248057462 +0000 UTC m=+0.056936541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:00 compute-0 systemd[1]: Started libpod-conmon-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope.
Dec  5 01:46:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.421311922 +0000 UTC m=+0.230191001 container init 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.438700291 +0000 UTC m=+0.247579300 container start 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.445215914 +0000 UTC m=+0.254094993 container attach 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:46:00 compute-0 hardcore_shockley[409862]: 167 167
Dec  5 01:46:00 compute-0 systemd[1]: libpod-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope: Deactivated successfully.
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.451587673 +0000 UTC m=+0.260466682 container died 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:46:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d10a567a0c9a154bc63a0a1e95e36dc705fcc9e844c060339d5c29b0410f500d-merged.mount: Deactivated successfully.
Dec  5 01:46:00 compute-0 podman[409847]: 2025-12-05 01:46:00.524311167 +0000 UTC m=+0.333190206 container remove 02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:46:00 compute-0 systemd[1]: libpod-conmon-02d6edcf38bfad964bd3167b0b045986cdd8988792059df8a34e356417041cc8.scope: Deactivated successfully.
Dec  5 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.774494968 +0000 UTC m=+0.082356645 container create b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.74963006 +0000 UTC m=+0.057491767 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:00 compute-0 systemd[1]: Started libpod-conmon-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope.
Dec  5 01:46:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.966397152 +0000 UTC m=+0.274258859 container init b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:46:00 compute-0 podman[409887]: 2025-12-05 01:46:00.994035099 +0000 UTC m=+0.301896806 container start b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:46:01 compute-0 podman[409887]: 2025-12-05 01:46:01.000954593 +0000 UTC m=+0.308816330 container attach b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:46:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: ERROR   01:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:46:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:46:02 compute-0 laughing_cannon[409903]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:46:02 compute-0 laughing_cannon[409903]: --> relative data size: 1.0
Dec  5 01:46:02 compute-0 laughing_cannon[409903]: --> All data devices are unavailable
Dec  5 01:46:02 compute-0 systemd[1]: libpod-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Deactivated successfully.
Dec  5 01:46:02 compute-0 podman[409887]: 2025-12-05 01:46:02.303066761 +0000 UTC m=+1.610928518 container died b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:46:02 compute-0 systemd[1]: libpod-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Consumed 1.247s CPU time.
Dec  5 01:46:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b08708a8f9ed8658133c9f0ba4e4cef121dacd2c6265c50b29a76aa86568a73c-merged.mount: Deactivated successfully.
Dec  5 01:46:02 compute-0 podman[409887]: 2025-12-05 01:46:02.397641909 +0000 UTC m=+1.705503576 container remove b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:46:02 compute-0 systemd[1]: libpod-conmon-b57230d778f91f786bc0f0004a332ddcd581aad3160b16b30df935510ea50320.scope: Deactivated successfully.
Dec  5 01:46:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.641332493 +0000 UTC m=+0.088233940 container create 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.604746375 +0000 UTC m=+0.051647852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:03 compute-0 systemd[1]: Started libpod-conmon-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope.
Dec  5 01:46:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.794671633 +0000 UTC m=+0.241573160 container init 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.804365036 +0000 UTC m=+0.251266483 container start 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.810397505 +0000 UTC m=+0.257298962 container attach 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:46:03 compute-0 competent_grothendieck[410098]: 167 167
Dec  5 01:46:03 compute-0 systemd[1]: libpod-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope: Deactivated successfully.
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.819072229 +0000 UTC m=+0.265973746 container died 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:46:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e6b425de61a773851b564ad0d7f731386c35cac627a439cce9c16f1bce11a9-merged.mount: Deactivated successfully.
Dec  5 01:46:03 compute-0 podman[410082]: 2025-12-05 01:46:03.896547707 +0000 UTC m=+0.343449154 container remove 2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:46:03 compute-0 systemd[1]: libpod-conmon-2929dd4231ae2d81e995063ba18047f293597c09afba4444fc8d7e7dfdaaee4a.scope: Deactivated successfully.
Dec  5 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.173015107 +0000 UTC m=+0.083266771 container create 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.144633369 +0000 UTC m=+0.054885113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:04 compute-0 systemd[1]: Started libpod-conmon-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope.
Dec  5 01:46:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.304623086 +0000 UTC m=+0.214874740 container init 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.325790711 +0000 UTC m=+0.236042375 container start 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:46:04 compute-0 podman[410122]: 2025-12-05 01:46:04.331631525 +0000 UTC m=+0.241883179 container attach 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:46:05 compute-0 serene_williamson[410136]: {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    "0": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "devices": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "/dev/loop3"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            ],
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_name": "ceph_lv0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_size": "21470642176",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "name": "ceph_lv0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "tags": {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_name": "ceph",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.crush_device_class": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.encrypted": "0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_id": "0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.vdo": "0"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            },
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "vg_name": "ceph_vg0"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        }
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    ],
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    "1": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "devices": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "/dev/loop4"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            ],
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_name": "ceph_lv1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_size": "21470642176",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "name": "ceph_lv1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "tags": {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_name": "ceph",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.crush_device_class": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.encrypted": "0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_id": "1",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.vdo": "0"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            },
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "vg_name": "ceph_vg1"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        }
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    ],
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    "2": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "devices": [
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "/dev/loop5"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            ],
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_name": "ceph_lv2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_size": "21470642176",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "name": "ceph_lv2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "tags": {
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.cluster_name": "ceph",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.crush_device_class": "",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.encrypted": "0",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osd_id": "2",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:                "ceph.vdo": "0"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            },
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "type": "block",
Dec  5 01:46:05 compute-0 serene_williamson[410136]:            "vg_name": "ceph_vg2"
Dec  5 01:46:05 compute-0 serene_williamson[410136]:        }
Dec  5 01:46:05 compute-0 serene_williamson[410136]:    ]
Dec  5 01:46:05 compute-0 serene_williamson[410136]: }
Dec  5 01:46:05 compute-0 systemd[1]: libpod-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope: Deactivated successfully.
Dec  5 01:46:05 compute-0 podman[410122]: 2025-12-05 01:46:05.226523977 +0000 UTC m=+1.136775641 container died 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:46:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec  5 01:46:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-d33ab1900530df64fa26cb454143d7b5e5dfdf3401cd4737e794eb9827da2ec5-merged.mount: Deactivated successfully.
Dec  5 01:46:05 compute-0 podman[410122]: 2025-12-05 01:46:05.297676197 +0000 UTC m=+1.207927851 container remove 77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:46:05 compute-0 systemd[1]: libpod-conmon-77026929f48570f3f360752ceb04f05c585fc34e7b6cb5488da8b3beafa3069f.scope: Deactivated successfully.
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.350667923 +0000 UTC m=+0.069749932 container create a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:46:06 compute-0 systemd[1]: Started libpod-conmon-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope.
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.32497273 +0000 UTC m=+0.044054769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.470417178 +0000 UTC m=+0.189499267 container init a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.485257345 +0000 UTC m=+0.204339374 container start a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.491511771 +0000 UTC m=+0.210593770 container attach a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:46:06 compute-0 magical_thompson[410311]: 167 167
Dec  5 01:46:06 compute-0 systemd[1]: libpod-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope: Deactivated successfully.
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.497207691 +0000 UTC m=+0.216289720 container died a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:46:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-41523574de32a182627a3be617350c1fbbcfe1641b3fcffed3b840530ba73220-merged.mount: Deactivated successfully.
Dec  5 01:46:06 compute-0 podman[410296]: 2025-12-05 01:46:06.581279384 +0000 UTC m=+0.300361423 container remove a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:46:06 compute-0 systemd[1]: libpod-conmon-a3a2e81b9e2df32cf8e651f41e478770a31d774e8c59e1c4651a47c9d45acd83.scope: Deactivated successfully.
Dec  5 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.854533634 +0000 UTC m=+0.075692808 container create ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:46:06 compute-0 systemd[1]: Started libpod-conmon-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope.
Dec  5 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.830095247 +0000 UTC m=+0.051254471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:46:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:46:06 compute-0 podman[410334]: 2025-12-05 01:46:06.981179364 +0000 UTC m=+0.202338588 container init ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:46:07 compute-0 podman[410334]: 2025-12-05 01:46:07.010744395 +0000 UTC m=+0.231903609 container start ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:46:07 compute-0 podman[410334]: 2025-12-05 01:46:07.018870123 +0000 UTC m=+0.240029327 container attach ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:46:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec  5 01:46:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:08 compute-0 nice_lamport[410350]: {
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_id": 0,
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "type": "bluestore"
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    },
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_id": 1,
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "type": "bluestore"
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    },
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_id": 2,
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:46:08 compute-0 nice_lamport[410350]:        "type": "bluestore"
Dec  5 01:46:08 compute-0 nice_lamport[410350]:    }
Dec  5 01:46:08 compute-0 nice_lamport[410350]: }
Dec  5 01:46:08 compute-0 systemd[1]: libpod-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Deactivated successfully.
Dec  5 01:46:08 compute-0 podman[410334]: 2025-12-05 01:46:08.151199888 +0000 UTC m=+1.372359072 container died ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:46:08 compute-0 systemd[1]: libpod-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Consumed 1.140s CPU time.
Dec  5 01:46:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c632f468fc69e044474fcab3e36029aeff7943ca18e6ed6bfb8710886d474652-merged.mount: Deactivated successfully.
Dec  5 01:46:08 compute-0 podman[410334]: 2025-12-05 01:46:08.238304436 +0000 UTC m=+1.459463610 container remove ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 01:46:08 compute-0 systemd[1]: libpod-conmon-ab120b3ee7f4f1de6c276b322d9d350e103b33c6409e2bc0cbea3418db5b35c9.scope: Deactivated successfully.
Dec  5 01:46:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:46:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:46:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b9f0602-cf97-42e6-bc5a-75f8b593e147 does not exist
Dec  5 01:46:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 55b19990-59fc-42fe-92f6-22ef2319c8dd does not exist
Dec  5 01:46:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:46:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec  5 01:46:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:46:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:46:15 compute-0 nova_compute[349548]: 2025-12-05 01:46:15.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:16 compute-0 nova_compute[349548]: 2025-12-05 01:46:16.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:46:16
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.449005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176449073, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2047, "num_deletes": 251, "total_data_size": 3470778, "memory_usage": 3532560, "flush_reason": "Manual Compaction"}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176477687, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3383215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20896, "largest_seqno": 22942, "table_properties": {"data_size": 3373943, "index_size": 5830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18421, "raw_average_key_size": 19, "raw_value_size": 3355552, "raw_average_value_size": 3627, "num_data_blocks": 264, "num_entries": 925, "num_filter_entries": 925, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764898952, "oldest_key_time": 1764898952, "file_creation_time": 1764899176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 28787 microseconds, and 15546 cpu microseconds.
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.477798) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3383215 bytes OK
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.477833) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480389) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480414) EVENT_LOG_v1 {"time_micros": 1764899176480405, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.480445) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3462222, prev total WAL file size 3462222, number of live WAL files 2.
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.482774) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3303KB)], [50(7306KB)]
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176482938, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10865298, "oldest_snapshot_seqno": -1}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4690 keys, 9147253 bytes, temperature: kUnknown
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176570418, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9147253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9113411, "index_size": 20996, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 114753, "raw_average_key_size": 24, "raw_value_size": 9026081, "raw_average_value_size": 1924, "num_data_blocks": 885, "num_entries": 4690, "num_filter_entries": 4690, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.571645) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9147253 bytes
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.649684) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.8 rd, 103.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.1 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5204, records dropped: 514 output_compression: NoCompression
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.649722) EVENT_LOG_v1 {"time_micros": 1764899176649708, "job": 26, "event": "compaction_finished", "compaction_time_micros": 88479, "compaction_time_cpu_micros": 21003, "output_level": 6, "num_output_files": 1, "total_output_size": 9147253, "num_input_records": 5204, "num_output_records": 4690, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176654614, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899176657498, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.482422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:46:16.658103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:46:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:46:16 compute-0 podman[410444]: 2025-12-05 01:46:16.731437874 +0000 UTC m=+0.128935835 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:46:16 compute-0 podman[410443]: 2025-12-05 01:46:16.748863223 +0000 UTC m=+0.147323591 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.093 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.134 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:46:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1100: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 47 op/s
Dec  5 01:46:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:46:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4289739040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:46:17 compute-0 nova_compute[349548]: 2025-12-05 01:46:17.588 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.132 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4535MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.216 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.217 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.254 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:46:18 compute-0 podman[410525]: 2025-12-05 01:46:18.722505934 +0000 UTC m=+0.134740068 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:46:18 compute-0 podman[410526]: 2025-12-05 01:46:18.743028271 +0000 UTC m=+0.154515664 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  5 01:46:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:46:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195552152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.785 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.794 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.816 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.819 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:46:18 compute-0 nova_compute[349548]: 2025-12-05 01:46:18.820 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:46:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec  5 01:46:19 compute-0 nova_compute[349548]: 2025-12-05 01:46:19.793 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:19 compute-0 nova_compute[349548]: 2025-12-05 01:46:19.794 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:46:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Dec  5 01:46:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:22 compute-0 podman[410566]: 2025-12-05 01:46:22.731228064 +0000 UTC m=+0.138652718 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:46:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1104: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:46:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:46:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:29 compute-0 podman[410586]: 2025-12-05 01:46:29.689796963 +0000 UTC m=+0.090705430 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:46:29 compute-0 podman[410585]: 2025-12-05 01:46:29.700972667 +0000 UTC m=+0.111801313 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:46:29 compute-0 podman[410588]: 2025-12-05 01:46:29.728757258 +0000 UTC m=+0.127041021 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350)
Dec  5 01:46:29 compute-0 podman[410587]: 2025-12-05 01:46:29.744006937 +0000 UTC m=+0.157292262 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  5 01:46:29 compute-0 podman[158197]: time="2025-12-05T01:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:46:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:46:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8118 "" "Go-http-client/1.1"
Dec  5 01:46:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: ERROR   01:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:46:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:46:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1108: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec  5 01:46:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec  5 01:46:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec  5 01:46:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec  5 01:46:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec  5 01:46:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec  5 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec  5 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec  5 01:46:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec  5 01:46:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec  5 01:46:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.314 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.320 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': [], 'network.incoming.packets.error': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:46:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:46:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 2.6 MiB/s wr, 7 op/s
Dec  5 01:46:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.6 KiB/s rd, 2.5 MiB/s wr, 11 op/s
Dec  5 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec  5 01:46:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec  5 01:46:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec  5 01:46:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.2 MiB/s wr, 15 op/s
Dec  5 01:46:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:46:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:46:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:46:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4230110696' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:46:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.2 KiB/s wr, 8 op/s
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:46:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.466 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.468 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:46:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:46.469 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:46:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec  5 01:46:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:47 compute-0 podman[410670]: 2025-12-05 01:46:47.740202133 +0000 UTC m=+0.136728564 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:46:47 compute-0 podman[410669]: 2025-12-05 01:46:47.756709047 +0000 UTC m=+0.156881911 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:46:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 6 op/s
Dec  5 01:46:49 compute-0 podman[410713]: 2025-12-05 01:46:49.721597442 +0000 UTC m=+0.122529585 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec  5 01:46:49 compute-0 podman[410712]: 2025-12-05 01:46:49.744825775 +0000 UTC m=+0.150213443 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  5 01:46:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 307 B/s wr, 3 op/s
Dec  5 01:46:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:53 compute-0 podman[410752]: 2025-12-05 01:46:53.722632345 +0000 UTC m=+0.129770398 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, config_id=edpm, com.redhat.component=ubi9-container, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, managed_by=edpm_ansible)
Dec  5 01:46:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.173 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:46:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:46:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:46:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:46:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:46:59 compute-0 podman[158197]: time="2025-12-05T01:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:46:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:46:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8121 "" "Go-http-client/1.1"
Dec  5 01:47:00 compute-0 podman[410772]: 2025-12-05 01:47:00.735235793 +0000 UTC m=+0.132544866 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:00 compute-0 podman[410773]: 2025-12-05 01:47:00.740409419 +0000 UTC m=+0.133763701 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:47:00 compute-0 podman[410775]: 2025-12-05 01:47:00.742737114 +0000 UTC m=+0.118146452 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible)
Dec  5 01:47:00 compute-0 podman[410774]: 2025-12-05 01:47:00.784523709 +0000 UTC m=+0.170339789 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: ERROR   01:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:47:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:47:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ad6dd14d-5b37-4462-a897-263f65d64d47 does not exist
Dec  5 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0751906c-ebaf-4820-bf03-44ce664e0a60 does not exist
Dec  5 01:47:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c47572b3-fe68-410d-aeed-27ce94544431 does not exist
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:47:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:47:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.096441296 +0000 UTC m=+0.094865987 container create f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.060069554 +0000 UTC m=+0.058494305 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:11 compute-0 systemd[1]: Started libpod-conmon-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope.
Dec  5 01:47:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.239344063 +0000 UTC m=+0.237768754 container init f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.255374693 +0000 UTC m=+0.253799384 container start f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.262642837 +0000 UTC m=+0.261067588 container attach f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:11 compute-0 youthful_neumann[411137]: 167 167
Dec  5 01:47:11 compute-0 systemd[1]: libpod-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope: Deactivated successfully.
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.268576694 +0000 UTC m=+0.267001405 container died f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:47:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-758371781c17c3b7e6a0aa7dd7efa541ce08c4b5eb8694a3ccd3cb20fe444aa1-merged.mount: Deactivated successfully.
Dec  5 01:47:11 compute-0 podman[411121]: 2025-12-05 01:47:11.35808191 +0000 UTC m=+0.356506611 container remove f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_neumann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:47:11 compute-0 systemd[1]: libpod-conmon-f8a4b35c3ee0bbc14022f567bb643d96ce35f2d1d9dc60676dff5dc2ec993962.scope: Deactivated successfully.
Dec  5 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.648607134 +0000 UTC m=+0.091196363 container create 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.61357058 +0000 UTC m=+0.056159889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:11 compute-0 systemd[1]: Started libpod-conmon-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope.
Dec  5 01:47:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.850071457 +0000 UTC m=+0.292660766 container init 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.873747202 +0000 UTC m=+0.316336471 container start 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:47:11 compute-0 podman[411162]: 2025-12-05 01:47:11.880746049 +0000 UTC m=+0.323335358 container attach 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:47:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:13 compute-0 cool_ptolemy[411178]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:47:13 compute-0 cool_ptolemy[411178]: --> relative data size: 1.0
Dec  5 01:47:13 compute-0 cool_ptolemy[411178]: --> All data devices are unavailable
Dec  5 01:47:13 compute-0 systemd[1]: libpod-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Deactivated successfully.
Dec  5 01:47:13 compute-0 systemd[1]: libpod-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Consumed 1.277s CPU time.
Dec  5 01:47:13 compute-0 podman[411162]: 2025-12-05 01:47:13.200696628 +0000 UTC m=+1.643285897 container died 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:47:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b402432b3f575db961c8602a3b776fef0733249b95dc062adb5a094ebeed06-merged.mount: Deactivated successfully.
Dec  5 01:47:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:13 compute-0 podman[411162]: 2025-12-05 01:47:13.312725277 +0000 UTC m=+1.755314536 container remove 869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ptolemy, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:47:13 compute-0 systemd[1]: libpod-conmon-869840ce9f420231de0057d16a2f01c73f50ebda0a2c46eb75988fa52ac56e47.scope: Deactivated successfully.
Dec  5 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.52041304 +0000 UTC m=+0.068576558 container create 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:47:14 compute-0 systemd[1]: Started libpod-conmon-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope.
Dec  5 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.499320107 +0000 UTC m=+0.047483655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.664830089 +0000 UTC m=+0.212993677 container init 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.683464743 +0000 UTC m=+0.231628291 container start 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:47:14 compute-0 distracted_engelbart[411373]: 167 167
Dec  5 01:47:14 compute-0 podman[411357]: 2025-12-05 01:47:14.689746939 +0000 UTC m=+0.237910487 container attach 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:47:14 compute-0 systemd[1]: libpod-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope: Deactivated successfully.
Dec  5 01:47:14 compute-0 podman[411378]: 2025-12-05 01:47:14.769762258 +0000 UTC m=+0.057435245 container died 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 01:47:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecef7733cd8f40583d7892957fa3ce53ece3ec169a85157184062b73811d0f1b-merged.mount: Deactivated successfully.
Dec  5 01:47:14 compute-0 podman[411378]: 2025-12-05 01:47:14.842297397 +0000 UTC m=+0.129970414 container remove 0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_engelbart, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec  5 01:47:14 compute-0 systemd[1]: libpod-conmon-0760c74bd044b43f7a82975b218193f9d5872d8db4ab1205cfef811790848358.scope: Deactivated successfully.
Dec  5 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.152259548 +0000 UTC m=+0.091635046 container create cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.118323885 +0000 UTC m=+0.057699443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:15 compute-0 systemd[1]: Started libpod-conmon-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope.
Dec  5 01:47:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1133: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.338569194 +0000 UTC m=+0.277944772 container init cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.350395657 +0000 UTC m=+0.289771165 container start cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:47:15 compute-0 podman[411399]: 2025-12-05 01:47:15.358033981 +0000 UTC m=+0.297409739 container attach cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:47:16 compute-0 nova_compute[349548]: 2025-12-05 01:47:16.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:16 compute-0 eager_brown[411415]: {
Dec  5 01:47:16 compute-0 eager_brown[411415]:    "0": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:        {
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "devices": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "/dev/loop3"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            ],
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_name": "ceph_lv0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_size": "21470642176",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "name": "ceph_lv0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "tags": {
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_name": "ceph",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.crush_device_class": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.encrypted": "0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_id": "0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.vdo": "0"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            },
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "vg_name": "ceph_vg0"
Dec  5 01:47:16 compute-0 eager_brown[411415]:        }
Dec  5 01:47:16 compute-0 eager_brown[411415]:    ],
Dec  5 01:47:16 compute-0 eager_brown[411415]:    "1": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:        {
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "devices": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "/dev/loop4"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            ],
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_name": "ceph_lv1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_size": "21470642176",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "name": "ceph_lv1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "tags": {
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_name": "ceph",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.crush_device_class": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.encrypted": "0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_id": "1",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.vdo": "0"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            },
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "vg_name": "ceph_vg1"
Dec  5 01:47:16 compute-0 eager_brown[411415]:        }
Dec  5 01:47:16 compute-0 eager_brown[411415]:    ],
Dec  5 01:47:16 compute-0 eager_brown[411415]:    "2": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:        {
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "devices": [
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "/dev/loop5"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            ],
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_name": "ceph_lv2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_size": "21470642176",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "name": "ceph_lv2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "tags": {
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.cluster_name": "ceph",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.crush_device_class": "",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.encrypted": "0",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osd_id": "2",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:                "ceph.vdo": "0"
Dec  5 01:47:16 compute-0 eager_brown[411415]:            },
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "type": "block",
Dec  5 01:47:16 compute-0 eager_brown[411415]:            "vg_name": "ceph_vg2"
Dec  5 01:47:16 compute-0 eager_brown[411415]:        }
Dec  5 01:47:16 compute-0 eager_brown[411415]:    ]
Dec  5 01:47:16 compute-0 eager_brown[411415]: }
Dec  5 01:47:16 compute-0 systemd[1]: libpod-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope: Deactivated successfully.
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:47:16
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'backups', '.rgw.root', 'images', 'volumes', 'default.rgw.meta']
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:16 compute-0 podman[411424]: 2025-12-05 01:47:16.321093419 +0000 UTC m=+0.049278216 container died cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:47:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e8eadb16d7e89d32500700bbeca341accbf4d034cf4440aa4b3cd4a1e969959-merged.mount: Deactivated successfully.
Dec  5 01:47:16 compute-0 podman[411424]: 2025-12-05 01:47:16.437649925 +0000 UTC m=+0.165834682 container remove cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brown, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:47:16 compute-0 systemd[1]: libpod-conmon-cf83c8b1abb947e6aefa8f6c73935426083ed86b8628842e058653ac0bff793b.scope: Deactivated successfully.
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:47:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.087 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.090 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.120 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.122 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.123 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:47:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1089642857' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:47:17 compute-0 nova_compute[349548]: 2025-12-05 01:47:17.653 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.671219586 +0000 UTC m=+0.088641032 container create f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:47:17 compute-0 systemd[1]: Started libpod-conmon-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope.
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.639562176 +0000 UTC m=+0.056983622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.804271356 +0000 UTC m=+0.221692812 container init f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.81580318 +0000 UTC m=+0.233224616 container start f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.821796958 +0000 UTC m=+0.239218384 container attach f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:47:17 compute-0 compassionate_hermann[411616]: 167 167
Dec  5 01:47:17 compute-0 systemd[1]: libpod-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope: Deactivated successfully.
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.826306135 +0000 UTC m=+0.243727581 container died f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:47:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d2db5cb3fbaa14d7c83a7c36671b03f6d538226353cbb25353112129626a53-merged.mount: Deactivated successfully.
Dec  5 01:47:17 compute-0 podman[411599]: 2025-12-05 01:47:17.890275913 +0000 UTC m=+0.307697329 container remove f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hermann, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:47:17 compute-0 systemd[1]: libpod-conmon-f4fefc963745ab0eb9a5a26b76951a216fff7e6290f175547aa1b0a77d84d712.scope: Deactivated successfully.
Dec  5 01:47:17 compute-0 podman[411620]: 2025-12-05 01:47:17.947873682 +0000 UTC m=+0.127664539 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:47:17 compute-0 podman[411619]: 2025-12-05 01:47:17.977670709 +0000 UTC m=+0.153613208 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.085301405 +0000 UTC m=+0.067566680 container create 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.137 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.138 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.053634655 +0000 UTC m=+0.035899930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:47:18 compute-0 systemd[1]: Started libpod-conmon-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope.
Dec  5 01:47:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.228981473 +0000 UTC m=+0.211246788 container init 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.236 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.236 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.260576901 +0000 UTC m=+0.242842146 container start 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:47:18 compute-0 podman[411675]: 2025-12-05 01:47:18.266645351 +0000 UTC m=+0.248910626 container attach 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.278 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:47:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1996216443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.775 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.788 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.808 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.810 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:47:18 compute-0 nova_compute[349548]: 2025-12-05 01:47:18.811 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]: {
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_id": 0,
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "type": "bluestore"
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    },
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_id": 1,
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "type": "bluestore"
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    },
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_id": 2,
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:        "type": "bluestore"
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]:    }
Dec  5 01:47:19 compute-0 quizzical_nobel[411692]: }
Dec  5 01:47:19 compute-0 systemd[1]: libpod-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Deactivated successfully.
Dec  5 01:47:19 compute-0 systemd[1]: libpod-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Consumed 1.207s CPU time.
Dec  5 01:47:19 compute-0 podman[411747]: 2025-12-05 01:47:19.540835033 +0000 UTC m=+0.051319393 container died 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:47:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-82d7816d7a74395b7722b83871fd191362a1c7ac9913269bc9898a1a518ea16f-merged.mount: Deactivated successfully.
Dec  5 01:47:19 compute-0 podman[411747]: 2025-12-05 01:47:19.654536789 +0000 UTC m=+0.165021129 container remove 057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 01:47:19 compute-0 systemd[1]: libpod-conmon-057bdcc0bf2227f998ad2ced57c2f557263473a01de824338dd79533e1f5eefe.scope: Deactivated successfully.
Dec  5 01:47:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:47:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:47:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 12ce805b-c793-4957-8617-17fd90948acd does not exist
Dec  5 01:47:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev be294e56-88cb-41ac-bac2-7ad0ac4ad1a7 does not exist
Dec  5 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.791 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.791 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.792 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:19 compute-0 nova_compute[349548]: 2025-12-05 01:47:19.792 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:20 compute-0 podman[411787]: 2025-12-05 01:47:20.047697779 +0000 UTC m=+0.117908335 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec  5 01:47:20 compute-0 nova_compute[349548]: 2025-12-05 01:47:20.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:47:20 compute-0 podman[411786]: 2025-12-05 01:47:20.077094265 +0000 UTC m=+0.150866741 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:47:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:47:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:24 compute-0 podman[411846]: 2025-12-05 01:47:24.718860376 +0000 UTC m=+0.122456833 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, config_id=edpm, managed_by=edpm_ansible, container_name=kepler, release-0.7.12=, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:47:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:47:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:47:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1139: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:29 compute-0 podman[158197]: time="2025-12-05T01:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:47:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:47:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8116 "" "Go-http-client/1.1"
Dec  5 01:47:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: ERROR   01:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:47:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:47:31 compute-0 podman[411863]: 2025-12-05 01:47:31.705729869 +0000 UTC m=+0.122487984 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:47:31 compute-0 podman[411864]: 2025-12-05 01:47:31.716144341 +0000 UTC m=+0.121371832 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:47:31 compute-0 podman[411871]: 2025-12-05 01:47:31.737271225 +0000 UTC m=+0.115020094 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  5 01:47:31 compute-0 podman[411869]: 2025-12-05 01:47:31.758750599 +0000 UTC m=+0.149775731 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:47:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:31.805 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:47:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:31.806 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:47:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.589319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252589357, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 506, "total_data_size": 1209092, "memory_usage": 1235512, "flush_reason": "Manual Compaction"}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252602254, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 907396, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22943, "largest_seqno": 24101, "table_properties": {"data_size": 902752, "index_size": 1720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13993, "raw_average_key_size": 18, "raw_value_size": 890981, "raw_average_value_size": 1200, "num_data_blocks": 77, "num_entries": 742, "num_filter_entries": 742, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899177, "oldest_key_time": 1764899177, "file_creation_time": 1764899252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13035 microseconds, and 7365 cpu microseconds.
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.602350) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 907396 bytes OK
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.602376) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604790) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604813) EVENT_LOG_v1 {"time_micros": 1764899252604806, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.604836) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1202657, prev total WAL file size 1202657, number of live WAL files 2.
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.606053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(886KB)], [53(8932KB)]
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252606088, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10054649, "oldest_snapshot_seqno": -1}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4425 keys, 6961519 bytes, temperature: kUnknown
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252670552, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 6961519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6932304, "index_size": 17073, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 110889, "raw_average_key_size": 25, "raw_value_size": 6852390, "raw_average_value_size": 1548, "num_data_blocks": 711, "num_entries": 4425, "num_filter_entries": 4425, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.671107) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 6961519 bytes
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.674868) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(18.8) write-amplify(7.7) OK, records in: 5432, records dropped: 1007 output_compression: NoCompression
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.675002) EVENT_LOG_v1 {"time_micros": 1764899252674983, "job": 28, "event": "compaction_finished", "compaction_time_micros": 64573, "compaction_time_cpu_micros": 37042, "output_level": 6, "num_output_files": 1, "total_output_size": 6961519, "num_input_records": 5432, "num_output_records": 4425, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252675543, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899252679180, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.605748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:47:32.679438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:47:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1144: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:37 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:37.808 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:47:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:47:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:47:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:47:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3277609033' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:47:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:47:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:47:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:48 compute-0 podman[411950]: 2025-12-05 01:47:48.721818581 +0000 UTC m=+0.131970041 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:47:48 compute-0 podman[411951]: 2025-12-05 01:47:48.73282383 +0000 UTC m=+0.135351495 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:47:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:50 compute-0 podman[411989]: 2025-12-05 01:47:50.717255554 +0000 UTC m=+0.127766522 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec  5 01:47:50 compute-0 podman[411990]: 2025-12-05 01:47:50.762946938 +0000 UTC m=+0.164516835 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:47:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.397 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.398 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.431 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.561 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.562 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.575 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.576 349552 INFO nova.compute.claims [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 01:47:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:52 compute-0 nova_compute[349548]: 2025-12-05 01:47:52.702 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:47:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279624068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.202 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.218 349552 DEBUG nova.compute.provider_tree [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.237 349552 DEBUG nova.scheduler.client.report [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.269 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.270 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 01:47:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.324 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.325 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.356 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.398 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.527 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.529 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.530 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating image(s)#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.587 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.652 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.713 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.723 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:53 compute-0 nova_compute[349548]: 2025-12-05 01:47:53.725 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.022 349552 DEBUG nova.virt.libvirt.imagebackend [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  5 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.229 349552 WARNING oslo_policy.policy [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  5 01:47:54 compute-0 nova_compute[349548]: 2025-12-05 01:47:54.230 349552 WARNING oslo_policy.policy [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  5 01:47:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.371 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Successfully created port: 68143c81-65a4-4ed0-8902-dbe0c8d89224 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.664 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:55 compute-0 podman[412103]: 2025-12-05 01:47:55.732534892 +0000 UTC m=+0.135760827 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.765 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.767 349552 DEBUG nova.virt.images [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] aa58c1e9-bdcc-4e60-9cee-eaeee0741251 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.769 349552 DEBUG nova.privsep.utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.770 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.971 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.part /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:55 compute-0 nova_compute[349548]: 2025-12-05 01:47:55.983 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.066 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42.converted --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.069 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.116 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.124 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.174 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:47:56.175 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec  5 01:47:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec  5 01:47:56 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.567 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Successfully updated port: 68143c81-65a4-4ed0-8902-dbe0c8d89224 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.589 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 01:47:56 compute-0 nova_compute[349548]: 2025-12-05 01:47:56.753 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.090 349552 DEBUG nova.compute.manager [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.090 349552 DEBUG nova.compute.manager [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing instance network info cache due to event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.091 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:47:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 204 B/s wr, 8 op/s
Dec  5 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec  5 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec  5 01:47:57 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec  5 01:47:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.802 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:57 compute-0 nova_compute[349548]: 2025-12-05 01:47:57.960 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.229 349552 DEBUG nova.objects.instance [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.300 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.359 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.368 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.369 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.370 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.416 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.417 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.459 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.461 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.501 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.513 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.805 349552 DEBUG nova.network.neutron [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.838 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.840 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance network_info: |[{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.841 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:47:58 compute-0 nova_compute[349548]: 2025-12-05 01:47:58.842 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:47:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 10 op/s
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.528 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.719 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.719 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Ensure instance console log exists: /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.720 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.720 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.721 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.724 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start _get_guest_xml network_info=[{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.731 349552 WARNING nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.738 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.738 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.745 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.745 349552 DEBUG nova.virt.libvirt.host [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.746 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.747 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.748 349552 DEBUG nova.virt.hardware [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 01:47:59 compute-0 podman[158197]: time="2025-12-05T01:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.753 349552 DEBUG nova.privsep.utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  5 01:47:59 compute-0 nova_compute[349548]: 2025-12-05 01:47:59.753 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:47:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 01:47:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8124 "" "Go-http-client/1.1"
Dec  5 01:48:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:48:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3742020724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.264 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.266 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:48:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644237013' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.741 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.791 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:48:00 compute-0 nova_compute[349548]: 2025-12-05 01:48:00.803 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:48:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/870861907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.321 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.323 349552 DEBUG nova.virt.libvirt.vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs
=None,updated_at=2025-12-05T01:47:53Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.323 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.324 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:48:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 25 MiB data, 167 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 438 KiB/s wr, 43 op/s
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.328 349552 DEBUG nova.objects.instance [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.350 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] End _get_guest_xml xml=<domain type="kvm">
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <uuid>b69a0e24-1bc4-46a5-92d7-367c1efd53df</uuid>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <name>instance-00000001</name>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <memory>524288</memory>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <metadata>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:name>test_0</nova:name>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 01:47:59</nova:creationTime>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:flavor name="m1.small">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:memory>512</nova:memory>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:ephemeral>1</nova:ephemeral>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <nova:port uuid="68143c81-65a4-4ed0-8902-dbe0c8d89224">
Dec  5 01:48:01 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="192.168.0.48" ipVersion="4"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </metadata>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <system>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="serial">b69a0e24-1bc4-46a5-92d7-367c1efd53df</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="uuid">b69a0e24-1bc4-46a5-92d7-367c1efd53df</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </system>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <os>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </os>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <features>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <apic/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </features>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </clock>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </source>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.eph0">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </source>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <target dev="vdb" bus="virtio"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </source>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:48:01 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:0c:12:24"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <target dev="tap68143c81-65"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/console.log" append="off"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </serial>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <video>
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </video>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 01:48:01 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 01:48:01 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 01:48:01 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:48:01 compute-0 nova_compute[349548]: </domain>
Dec  5 01:48:01 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.350 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Preparing to wait for external event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.351 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.352 349552 DEBUG nova.virt.libvirt.vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:47:53Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.352 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.353 349552 DEBUG nova.network.os_vif_util [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.353 349552 DEBUG os_vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.388 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.389 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.404 349552 INFO oslo.privsep.daemon [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp4qlrv0yp/privsep.sock']#033[00m
Dec  5 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:48:01 compute-0 openstack_network_exporter[366555]: ERROR   01:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.465 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated VIF entry in instance network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.466 349552 DEBUG nova.network.neutron [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:48:01 compute-0 nova_compute[349548]: 2025-12-05 01:48:01.487 349552 DEBUG oslo_concurrency.lockutils [req-f28f4562-ba29-4bcc-8622-f9502d453a2e req-2f0d602a-4578-44cb-9bb0-300c96c33a59 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.218 349552 INFO oslo.privsep.daemon [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.089 412465 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.096 412465 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.100 412465 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.100 412465 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412465#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.591 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.592 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68143c81-65, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.593 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68143c81-65, col_values=(('external_ids', {'iface-id': '68143c81-65a4-4ed0-8902-dbe0c8d89224', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:12:24', 'vm-uuid': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.597 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:02 compute-0 NetworkManager[49092]: <info>  [1764899282.5975] manager: (tap68143c81-65): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.610 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.611 349552 INFO os_vif [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65')#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.678 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.680 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:0c:12:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.682 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Using config drive#033[00m
Dec  5 01:48:02 compute-0 podman[412469]: 2025-12-05 01:48:02.683036112 +0000 UTC m=+0.101681919 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:48:02 compute-0 podman[412472]: 2025-12-05 01:48:02.701195502 +0000 UTC m=+0.098084808 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, version=9.6)
Dec  5 01:48:02 compute-0 podman[412470]: 2025-12-05 01:48:02.716265526 +0000 UTC m=+0.122639228 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:48:02 compute-0 nova_compute[349548]: 2025-12-05 01:48:02.731 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:48:02 compute-0 podman[412471]: 2025-12-05 01:48:02.746444324 +0000 UTC m=+0.145632984 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:48:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 65 op/s
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.373 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Creating config drive at /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.382 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vnvoxp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.530 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc2vnvoxp" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.587 349552 DEBUG nova.storage.rbd_utils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.599 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.927 349552 DEBUG oslo_concurrency.processutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config b69a0e24-1bc4-46a5-92d7-367c1efd53df_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:03 compute-0 nova_compute[349548]: 2025-12-05 01:48:03.929 349552 INFO nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deleting local config drive /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.config because it was imported into RBD.#033[00m
Dec  5 01:48:03 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 01:48:04 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 01:48:04 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  5 01:48:04 compute-0 kernel: tap68143c81-65: entered promiscuous mode
Dec  5 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.1463] manager: (tap68143c81-65): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec  5 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00027|binding|INFO|Claiming lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 for this chassis.
Dec  5 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00028|binding|INFO|68143c81-65a4-4ed0-8902-dbe0c8d89224: Claiming fa:16:3e:0c:12:24 192.168.0.48
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.151 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.170 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.185 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:12:24 192.168.0.48'], port_security=['fa:16:3e:0c:12:24 192.168.0.48'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.48/24', 'neutron:device_id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=68143c81-65a4-4ed0-8902-dbe0c8d89224) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.187 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 68143c81-65a4-4ed0-8902-dbe0c8d89224 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis#033[00m
Dec  5 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.189 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 01:48:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.192 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpft85vvy5/privsep.sock']#033[00m
Dec  5 01:48:04 compute-0 systemd-udevd[412648]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.2522] device (tap68143c81-65): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 01:48:04 compute-0 NetworkManager[49092]: <info>  [1764899284.2634] device (tap68143c81-65): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 01:48:04 compute-0 systemd-machined[138700]: New machine qemu-1-instance-00000001.
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.280 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:04 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  5 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00029|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 ovn-installed in OVS
Dec  5 01:48:04 compute-0 ovn_controller[89286]: 2025-12-05T01:48:04Z|00030|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 up in Southbound
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.290 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:04 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 01:48:04 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.910 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899284.9092321, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.911 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Started (Lifecycle Event)#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.967 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.975 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899284.9093807, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.975 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Paused (Lifecycle Event)#033[00m
Dec  5 01:48:04 compute-0 nova_compute[349548]: 2025-12-05 01:48:04.996 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.001 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.002 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpft85vvy5/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.005 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.859 412744 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.865 412744 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.869 412744 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:04.870 412744 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412744#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.008 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6435c739-6104-49d0-ad72-c5e8e65ee199]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.029 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:48:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1160: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.360 349552 DEBUG nova.compute.manager [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.361 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.362 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.363 349552 DEBUG oslo_concurrency.lockutils [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.364 349552 DEBUG nova.compute.manager [req-3c8fd7c3-84b0-40ce-b4c2-a5a6956fe60e req-3558dab2-8ad8-4b40-9944-72e6862064f2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Processing event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.365 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.372 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.373 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899285.3715525, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.373 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Resumed (Lifecycle Event)#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.392 349552 INFO nova.virt.libvirt.driver [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance spawned successfully.#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.393 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.405 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.420 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.431 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.432 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.438 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.440 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.446 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.448 349552 DEBUG nova.virt.libvirt.driver [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.453 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.527 349552 INFO nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 12.00 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.528 349552 DEBUG nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.633 349552 INFO nova.compute.manager [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 13.12 seconds to build instance.#033[00m
Dec  5 01:48:05 compute-0 nova_compute[349548]: 2025-12-05 01:48:05.678 349552 DEBUG oslo_concurrency.lockutils [None req-52632562-fd7b-439f-a6d6-d92cdac7a1a4 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.699 412744 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.700 412744 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:05 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:05.701 412744 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:06 compute-0 nova_compute[349548]: 2025-12-05 01:48:06.308 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.364 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c817f9d-0be6-411c-983a-21a0ee91a1ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.366 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap49f7d2f1-f1 in ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.369 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap49f7d2f1-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.369 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ca746c4f-833d-4d5f-898b-191c6811646b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.374 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01e1c7-7be7-48e0-bdb3-3935f187443f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.420 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[982b3858-14b6-42f2-bb91-882ad80b3ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.462 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d346d983-f575-4cbf-9874-04779dc8c4c3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:06 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:06.465 287122 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpq2x94x5c/privsep.sock']#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.246 287122 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.248 287122 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpq2x94x5c/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.127 412758 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.134 412758 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.138 412758 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.138 412758 INFO oslo.privsep.daemon [-] privsep daemon running as pid 412758#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.254 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[30d33942-75d9-486f-9671-55ba3c07e2ef]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.495 349552 DEBUG nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.495 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG oslo_concurrency.lockutils [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.496 349552 DEBUG nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.497 349552 WARNING nova.compute.manager [req-108e38ca-599d-4b1c-8917-580740a35e71 req-a2299452-9c43-4dd8-bb3b-95a83f57f46c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received unexpected event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with vm_state active and task_state None.#033[00m
Dec  5 01:48:07 compute-0 nova_compute[349548]: 2025-12-05 01:48:07.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec  5 01:48:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec  5 01:48:07 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:07 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:07.741 412758 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.352 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4edc2d-cbfa-4168-a4b6-34ae2baa4e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.386 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3e44a0ba-0f1f-4077-b2f6-8bd65ba65715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.3889] manager: (tap49f7d2f1-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.428 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[bfafd821-313e-4785-913d-28f5994ede25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.432 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ff92f567-6d06-419a-954d-9e8956e94c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 systemd-udevd[412771]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.4803] device (tap49f7d2f1-f0): carrier: link connected
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.488 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[30a991ab-53e9-4445-867c-2ce9a782e927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.518 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0307eb-7b02-4e2e-808f-bd8e22392c71]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 412788, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.543 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b6e3c0-09e5-4427-b688-9e85f9387c01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:8a33'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537514, 'tstamp': 537514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412789, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.568 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97cb49d7-4daf-431b-8a94-4cd34fc93031]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412790, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.614 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b00aefed-bc7e-404f-a94a-2bf14a8c92be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.710 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c6050a-bb8e-4d57-a8ef-440d26b15487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.713 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.714 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.714 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.718 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:08 compute-0 kernel: tap49f7d2f1-f0: entered promiscuous mode
Dec  5 01:48:08 compute-0 NetworkManager[49092]: <info>  [1764899288.7246] manager: (tap49f7d2f1-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Dec  5 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.726 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.731 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:08 compute-0 ovn_controller[89286]: 2025-12-05T01:48:08Z|00031|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec  5 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.739 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.741 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[15ceeff3-e992-47d2-aa4c-d52722bf6123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.743 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: global
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.pid.haproxy
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 01:48:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:08.745 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'env', 'PROCESS_TAG=haproxy-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/49f7d2f1-f1ff-4dcc-94db-d088dc8d3183.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 01:48:08 compute-0 nova_compute[349548]: 2025-12-05 01:48:08.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.323516519 +0000 UTC m=+0.122749932 container create 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 01:48:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.6 MiB/s wr, 62 op/s
Dec  5 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.250009872 +0000 UTC m=+0.049243335 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 01:48:09 compute-0 systemd[1]: Started libpod-conmon-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope.
Dec  5 01:48:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a820a613b1e07df1e33c546156b70839ccd983fd42dcef40eb3db4bae4f3e023/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.492791206 +0000 UTC m=+0.292024679 container init 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:48:09 compute-0 podman[412823]: 2025-12-05 01:48:09.508617521 +0000 UTC m=+0.307850924 container start 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  5 01:48:09 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : New worker (412844) forked
Dec  5 01:48:09 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : Loading success.
Dec  5 01:48:11 compute-0 nova_compute[349548]: 2025-12-05 01:48:11.313 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 460 KiB/s rd, 1.3 MiB/s wr, 46 op/s
Dec  5 01:48:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:12 compute-0 nova_compute[349548]: 2025-12-05 01:48:12.605 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 72 op/s
Dec  5 01:48:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1166: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 61 op/s
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:48:16
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3531] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.351 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:16 compute-0 ovn_controller[89286]: 2025-12-05T01:48:16Z|00032|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3575] device (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3746] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3785] device (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3868] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3911] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3937] device (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  5 01:48:16 compute-0 NetworkManager[49092]: <info>  [1764899296.3961] device (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  5 01:48:16 compute-0 ovn_controller[89286]: 2025-12-05T01:48:16Z|00033|binding|INFO|Releasing lport 35b0af3f-4a87-44c5-9b77-2f08261b9985 from this chassis (sb_readonly=0)
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.419 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:16.528 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:48:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:16.530 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.534 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.643 349552 DEBUG nova.compute.manager [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.643 349552 DEBUG nova.compute.manager [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing instance network info cache due to event network-changed-68143c81-65a4-4ed0-8902-dbe0c8d89224. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.644 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.644 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:48:16 compute-0 nova_compute[349548]: 2025-12-05 01:48:16.645 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Refreshing network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:48:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:48:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 55 op/s
Dec  5 01:48:17 compute-0 nova_compute[349548]: 2025-12-05 01:48:17.608 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:18 compute-0 nova_compute[349548]: 2025-12-05 01:48:18.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:18 compute-0 nova_compute[349548]: 2025-12-05 01:48:18.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.171 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated VIF entry in instance network info cache for port 68143c81-65a4-4ed0-8902-dbe0c8d89224. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.172 349552 DEBUG nova.network.neutron [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.215 349552 DEBUG oslo_concurrency.lockutils [req-05d4582e-fce2-4d13-aca6-0fb2a415ceda req-e02e655a-33bb-47b6-925c-f0d859dcc66a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.328 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:48:19 compute-0 nova_compute[349548]: 2025-12-05 01:48:19.328 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:48:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 47 op/s
Dec  5 01:48:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:19.535 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:48:19 compute-0 podman[412856]: 2025-12-05 01:48:19.828101649 +0000 UTC m=+0.233564435 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:48:19 compute-0 podman[412857]: 2025-12-05 01:48:19.842832473 +0000 UTC m=+0.245206202 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:48:21 compute-0 nova_compute[349548]: 2025-12-05 01:48:21.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ee7136f9-95a2-4986-81c3-45835f3aaf37 does not exist
Dec  5 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aa8bd3f3-9d0d-4de5-9d5b-a0a667ab5e28 does not exist
Dec  5 01:48:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81d1ad86-45f6-4b57-a160-50329b987400 does not exist
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:48:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:48:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:48:21 compute-0 podman[413047]: 2025-12-05 01:48:21.720294662 +0000 UTC m=+0.123639406 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm)
Dec  5 01:48:21 compute-0 podman[413051]: 2025-12-05 01:48:21.750553052 +0000 UTC m=+0.149220115 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.018 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.032 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.034 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.035 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.037 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.061 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.064 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.066 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.067 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.068 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:48:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:48:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/472524412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.594 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.612 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.614668219 +0000 UTC m=+0.095506085 container create df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.574109779 +0000 UTC m=+0.054947695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.687 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.688 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:48:22 compute-0 nova_compute[349548]: 2025-12-05 01:48:22.689 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:48:22 compute-0 systemd[1]: Started libpod-conmon-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope.
Dec  5 01:48:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.780069488 +0000 UTC m=+0.260907314 container init df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.799928336 +0000 UTC m=+0.280766172 container start df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.805080241 +0000 UTC m=+0.285918097 container attach df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:48:22 compute-0 silly_brattain[413237]: 167 167
Dec  5 01:48:22 compute-0 systemd[1]: libpod-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope: Deactivated successfully.
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.816873742 +0000 UTC m=+0.297711608 container died df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:48:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc0bc3565cc8b7ba7657194d6b5207e8024d58085b7ef9b3fbcef3703dbf568-merged.mount: Deactivated successfully.
Dec  5 01:48:22 compute-0 podman[413220]: 2025-12-05 01:48:22.902409906 +0000 UTC m=+0.383247772 container remove df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_brattain, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec  5 01:48:22 compute-0 systemd[1]: libpod-conmon-df55db5150e14674db6a1a1f36a59201c65d19065bdba61c5e6d71827d165272.scope: Deactivated successfully.
Dec  5 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.163495914 +0000 UTC m=+0.064128353 container create 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.221 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.223 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4056MB free_disk=59.97224044799805GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.224 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.224 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.137151424 +0000 UTC m=+0.037783883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:23 compute-0 systemd[1]: Started libpod-conmon-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope.
Dec  5 01:48:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.301872713 +0000 UTC m=+0.202505252 container init 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.322307547 +0000 UTC m=+0.222939986 container start 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.327 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:48:23 compute-0 podman[413260]: 2025-12-05 01:48:23.327348689 +0000 UTC m=+0.227981148 container attach 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:48:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 36 op/s
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.359 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:48:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638324648' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.867 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.877 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.921 349552 ERROR nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [req-c40066fa-ef9c-4584-9a01-3e6b37f078e5] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID acf26aa2-2fef-4a53-8a44-6cfa2eb15d17.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-c40066fa-ef9c-4584-9a01-3e6b37f078e5"}]}#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.945 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.965 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.965 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:48:23 compute-0 nova_compute[349548]: 2025-12-05 01:48:23.983 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.005 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.051 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:48:24 compute-0 busy_clarke[413277]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:48:24 compute-0 busy_clarke[413277]: --> relative data size: 1.0
Dec  5 01:48:24 compute-0 busy_clarke[413277]: --> All data devices are unavailable
Dec  5 01:48:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:48:24 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810754302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:48:24 compute-0 systemd[1]: libpod-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Deactivated successfully.
Dec  5 01:48:24 compute-0 systemd[1]: libpod-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Consumed 1.205s CPU time.
Dec  5 01:48:24 compute-0 podman[413260]: 2025-12-05 01:48:24.585548532 +0000 UTC m=+1.486181011 container died 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.598 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.607 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:48:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aba23301f83851e4754989e0da5e2c83ff1740f2b42d2ad2061b6f39af6a924-merged.mount: Deactivated successfully.
Dec  5 01:48:24 compute-0 podman[413260]: 2025-12-05 01:48:24.6591352 +0000 UTC m=+1.559767639 container remove 768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:48:24 compute-0 systemd[1]: libpod-conmon-768f702c717820b02470096f5baed7d19277ca00f79556f4074efdda07e5648e.scope: Deactivated successfully.
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.718 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updated inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.719 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.719 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.742 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:48:24 compute-0 nova_compute[349548]: 2025-12-05 01:48:24.743 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:25 compute-0 podman[413499]: 2025-12-05 01:48:25.846442441 +0000 UTC m=+0.095954368 container create 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:48:25 compute-0 podman[413499]: 2025-12-05 01:48:25.810610574 +0000 UTC m=+0.060122541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:25 compute-0 systemd[1]: Started libpod-conmon-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope.
Dec  5 01:48:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.000533922 +0000 UTC m=+0.250045829 container init 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.012111778 +0000 UTC m=+0.261623705 container start 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.019159476 +0000 UTC m=+0.268671383 container attach 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:48:26 compute-0 hardcore_hawking[413521]: 167 167
Dec  5 01:48:26 compute-0 systemd[1]: libpod-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope: Deactivated successfully.
Dec  5 01:48:26 compute-0 conmon[413521]: conmon 848710801ee4bb4447f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope/container/memory.events
Dec  5 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.025751421 +0000 UTC m=+0.275263368 container died 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 01:48:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-f73d82ef0dbed4c9b2c5acf5fbb5fc964e2f668b5cc6f3cbdcb6003eb80691c9-merged.mount: Deactivated successfully.
Dec  5 01:48:26 compute-0 podman[413513]: 2025-12-05 01:48:26.096003585 +0000 UTC m=+0.168595339 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 01:48:26 compute-0 podman[413499]: 2025-12-05 01:48:26.115001109 +0000 UTC m=+0.364512996 container remove 848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:48:26 compute-0 systemd[1]: libpod-conmon-848710801ee4bb4447f3636eb1a5c29385ceec03767a02c5d297a175153fcd88.scope: Deactivated successfully.
Dec  5 01:48:26 compute-0 nova_compute[349548]: 2025-12-05 01:48:26.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.401556743 +0000 UTC m=+0.118586074 container create 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.349045217 +0000 UTC m=+0.066074628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:26 compute-0 systemd[1]: Started libpod-conmon-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope.
Dec  5 01:48:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.583421385 +0000 UTC m=+0.300450806 container init 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.607166392 +0000 UTC m=+0.324195753 container start 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:48:26 compute-0 podman[413557]: 2025-12-05 01:48:26.61386828 +0000 UTC m=+0.330897641 container attach 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0002673989263853617 of space, bias 1.0, pg target 0.08021967791560852 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:48:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:48:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1172: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:27 compute-0 kind_bell[413574]: {
Dec  5 01:48:27 compute-0 kind_bell[413574]:    "0": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:        {
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "devices": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "/dev/loop3"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            ],
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_name": "ceph_lv0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_size": "21470642176",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "name": "ceph_lv0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "tags": {
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_name": "ceph",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.crush_device_class": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.encrypted": "0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_id": "0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.vdo": "0"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            },
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "vg_name": "ceph_vg0"
Dec  5 01:48:27 compute-0 kind_bell[413574]:        }
Dec  5 01:48:27 compute-0 kind_bell[413574]:    ],
Dec  5 01:48:27 compute-0 kind_bell[413574]:    "1": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:        {
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "devices": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "/dev/loop4"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            ],
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_name": "ceph_lv1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_size": "21470642176",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "name": "ceph_lv1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "tags": {
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_name": "ceph",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.crush_device_class": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.encrypted": "0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_id": "1",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.vdo": "0"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            },
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "vg_name": "ceph_vg1"
Dec  5 01:48:27 compute-0 kind_bell[413574]:        }
Dec  5 01:48:27 compute-0 kind_bell[413574]:    ],
Dec  5 01:48:27 compute-0 kind_bell[413574]:    "2": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:        {
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "devices": [
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "/dev/loop5"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            ],
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_name": "ceph_lv2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_size": "21470642176",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "name": "ceph_lv2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "tags": {
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.cluster_name": "ceph",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.crush_device_class": "",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.encrypted": "0",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osd_id": "2",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:                "ceph.vdo": "0"
Dec  5 01:48:27 compute-0 kind_bell[413574]:            },
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "type": "block",
Dec  5 01:48:27 compute-0 kind_bell[413574]:            "vg_name": "ceph_vg2"
Dec  5 01:48:27 compute-0 kind_bell[413574]:        }
Dec  5 01:48:27 compute-0 kind_bell[413574]:    ]
Dec  5 01:48:27 compute-0 kind_bell[413574]: }
Dec  5 01:48:27 compute-0 systemd[1]: libpod-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope: Deactivated successfully.
Dec  5 01:48:27 compute-0 podman[413557]: 2025-12-05 01:48:27.493389329 +0000 UTC m=+1.210418690 container died 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:48:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a87e930bccd022c58fdd81e61c29313295d7503ac3b56866fd8e9b7c0e7c4570-merged.mount: Deactivated successfully.
Dec  5 01:48:27 compute-0 podman[413557]: 2025-12-05 01:48:27.596999522 +0000 UTC m=+1.314028863 container remove 900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 01:48:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:27 compute-0 nova_compute[349548]: 2025-12-05 01:48:27.614 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:48:27 compute-0 systemd[1]: libpod-conmon-900193d71d2eab9987d2fff9645d45b63c84deb786c6f35c5c15cec4a3e52e21.scope: Deactivated successfully.
Dec  5 01:48:28 compute-0 podman[413729]: 2025-12-05 01:48:28.86249271 +0000 UTC m=+0.089718503 container create d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:48:28 compute-0 podman[413729]: 2025-12-05 01:48:28.825410207 +0000 UTC m=+0.052635950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:28 compute-0 systemd[1]: Started libpod-conmon-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope.
Dec  5 01:48:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.048199689 +0000 UTC m=+0.275425462 container init d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.064134237 +0000 UTC m=+0.291359960 container start d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.070589558 +0000 UTC m=+0.297815341 container attach d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:48:29 compute-0 determined_blackburn[413745]: 167 167
Dec  5 01:48:29 compute-0 systemd[1]: libpod-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope: Deactivated successfully.
Dec  5 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.088744859 +0000 UTC m=+0.315970592 container died d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea9fa6ec47294c4443c7524fd8c5da632fb5647a526e82093d07c2a86de4edbe-merged.mount: Deactivated successfully.
Dec  5 01:48:29 compute-0 podman[413729]: 2025-12-05 01:48:29.189803019 +0000 UTC m=+0.417028722 container remove d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:48:29 compute-0 systemd[1]: libpod-conmon-d36d24ddf748b34ead4c1945f13de21d39d38f8dd381a6380c757bd2adf628e0.scope: Deactivated successfully.
Dec  5 01:48:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.506756758 +0000 UTC m=+0.094098106 container create 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.472220677 +0000 UTC m=+0.059562045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:48:29 compute-0 systemd[1]: Started libpod-conmon-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope.
Dec  5 01:48:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.687030404 +0000 UTC m=+0.274371762 container init 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.706228364 +0000 UTC m=+0.293569712 container start 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:48:29 compute-0 podman[413770]: 2025-12-05 01:48:29.713634572 +0000 UTC m=+0.300975990 container attach 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 01:48:29 compute-0 podman[158197]: time="2025-12-05T01:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:48:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45383 "" "Go-http-client/1.1"
Dec  5 01:48:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9027 "" "Go-http-client/1.1"
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]: {
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_id": 0,
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "type": "bluestore"
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    },
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_id": 1,
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "type": "bluestore"
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    },
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_id": 2,
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:        "type": "bluestore"
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]:    }
Dec  5 01:48:30 compute-0 vigorous_darwin[413786]: }
Dec  5 01:48:30 compute-0 systemd[1]: libpod-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Deactivated successfully.
Dec  5 01:48:30 compute-0 systemd[1]: libpod-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Consumed 1.133s CPU time.
Dec  5 01:48:30 compute-0 podman[413770]: 2025-12-05 01:48:30.843080226 +0000 UTC m=+1.430421554 container died 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b64a75317d6c81dc8749a0a85117cd7b19dd5836129b179db629cf7a12a54c-merged.mount: Deactivated successfully.
Dec  5 01:48:30 compute-0 podman[413770]: 2025-12-05 01:48:30.927950521 +0000 UTC m=+1.515291849 container remove 6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_darwin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:48:30 compute-0 systemd[1]: libpod-conmon-6a582f2ab40cbc34349e093fd4dc345ce25da18e291b3bef25c238170e0ca37b.scope: Deactivated successfully.
Dec  5 01:48:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:48:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:48:31 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:31 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b09f8d77-cab0-491f-a0b9-7bc9b220c36f does not exist
Dec  5 01:48:31 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 61364117-7920-4225-8895-0e7a5c661ec9 does not exist
Dec  5 01:48:31 compute-0 nova_compute[349548]: 2025-12-05 01:48:31.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:48:31 compute-0 openstack_network_exporter[366555]: ERROR   01:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:48:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:32 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:48:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:32 compute-0 nova_compute[349548]: 2025-12-05 01:48:32.618 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:33 compute-0 podman[413879]: 2025-12-05 01:48:33.668340703 +0000 UTC m=+0.082822669 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:48:33 compute-0 podman[413878]: 2025-12-05 01:48:33.701091613 +0000 UTC m=+0.108659005 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  5 01:48:33 compute-0 podman[413880]: 2025-12-05 01:48:33.778202681 +0000 UTC m=+0.174609649 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  5 01:48:33 compute-0 podman[413881]: 2025-12-05 01:48:33.781952736 +0000 UTC m=+0.173345533 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec  5 01:48:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:36 compute-0 nova_compute[349548]: 2025-12-05 01:48:36.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:37 compute-0 nova_compute[349548]: 2025-12-05 01:48:37.622 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.315 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.315 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.325 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b69a0e24-1bc4-46a5-92d7-367c1efd53df from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 01:48:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:38.726 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 01:48:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Fri, 05 Dec 2025 01:48:39 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-73605b56-4a3c-4695-81c0-2207f68d2fe0 x-openstack-request-id: req-73605b56-4a3c-4695-81c0-2207f68d2fe0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b69a0e24-1bc4-46a5-92d7-367c1efd53df", "name": "test_0", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:47:49Z", "updated": "2025-12-05T01:48:05Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.48", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0c:12:24"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0c:12:24"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:48:05.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.315 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b69a0e24-1bc4-46a5-92d7-367c1efd53df used request id req-73605b56-4a3c-4695-81c0-2207f68d2fe0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.320 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:48:41.319300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:48:41.325173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 nova_compute[349548]: 2025-12-05 01:48:41.326 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1179: 321 pgs: 321 active+clean; 49 MiB data, 180 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 op/s
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.356 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.357 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.357 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.359 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:48:41.359756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:48:41.362672) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.364 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:48:41.368636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.368 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ovn_controller[89286]: 2025-12-05T01:48:41Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:12:24 192.168.0.48
Dec  5 01:48:41 compute-0 ovn_controller[89286]: 2025-12-05T01:48:41Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:12:24 192.168.0.48
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.461 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 21187584 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.462 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 2160128 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.462 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 221518 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.464 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 1851644501 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.465 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:48:41.464445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.467 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 233035127 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.467 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 163808441 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.469 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 716 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.470 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 114 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.470 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 95 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.472 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.473 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.473 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 17543168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:48:41.469428) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.480 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.481 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:48:41.472565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.482 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.483 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:48:41.480131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.484 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:48:41.483489) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.531 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 5533603682 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:48:41.531315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 130 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:48:41.534858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:48:41.538917) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.544 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b69a0e24-1bc4-46a5-92d7-367c1efd53df / tap68143c81-65 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.545 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.547 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:48:41.547385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.548 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.549 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.550 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:48:41.550327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.551 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.551 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.552 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.552 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.553 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:48:41.554124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.555 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.557 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 903 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.557 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:48:41.556817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.560 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.560 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:48:41.560074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:48:41.563169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.564 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.564 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.565 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 33.296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.566 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:48:41.566119) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.567 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.567 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.569 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:48:41.569055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.571 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.571 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 5 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.572 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:48:41.571923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.574 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:48:41.574708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 33670000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:48:41.576803) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:48:41.578794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:48:41.580853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.581 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.582 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.583 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.584 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.585 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.586 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.587 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:41 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:48:41.588 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:48:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:42 compute-0 nova_compute[349548]: 2025-12-05 01:48:42.625 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 63 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Dec  5 01:48:44 compute-0 systemd[1]: Starting dnf makecache...
Dec  5 01:48:44 compute-0 dnf[413960]: Metadata cache refreshed recently.
Dec  5 01:48:44 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  5 01:48:44 compute-0 systemd[1]: Finished dnf makecache.
Dec  5 01:48:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:48:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:48:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:48:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1683311609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:48:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1181: 321 pgs: 321 active+clean; 71 MiB data, 197 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:48:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:48:46 compute-0 nova_compute[349548]: 2025-12-05 01:48:46.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:46 compute-0 ovn_controller[89286]: 2025-12-05T01:48:46Z|00034|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  5 01:48:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1182: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  5 01:48:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:47 compute-0 nova_compute[349548]: 2025-12-05 01:48:47.627 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  5 01:48:50 compute-0 podman[413962]: 2025-12-05 01:48:50.71094188 +0000 UTC m=+0.116836494 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, 
org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:48:50 compute-0 podman[413963]: 2025-12-05 01:48:50.737362203 +0000 UTC m=+0.136951020 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:48:51 compute-0 nova_compute[349548]: 2025-12-05 01:48:51.334 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 77 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 1.5 MiB/s wr, 56 op/s
Dec  5 01:48:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:52 compute-0 nova_compute[349548]: 2025-12-05 01:48:52.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:52 compute-0 podman[414004]: 2025-12-05 01:48:52.735582112 +0000 UTC m=+0.140565952 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:48:52 compute-0 podman[414005]: 2025-12-05 01:48:52.750266635 +0000 UTC m=+0.146522199 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:48:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec  5 01:48:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1186: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 333 KiB/s wr, 21 op/s
Dec  5 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.176 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.178 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:48:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:48:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:48:56 compute-0 nova_compute[349548]: 2025-12-05 01:48:56.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:56 compute-0 podman[414040]: 2025-12-05 01:48:56.734441457 +0000 UTC m=+0.134619495 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release=1214.1726694543, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 01:48:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 72 KiB/s wr, 18 op/s
Dec  5 01:48:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:48:57 compute-0 nova_compute[349548]: 2025-12-05 01:48:57.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:48:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  5 01:48:59 compute-0 podman[158197]: time="2025-12-05T01:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:48:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:48:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8595 "" "Go-http-client/1.1"
Dec  5 01:49:01 compute-0 nova_compute[349548]: 2025-12-05 01:49:01.341 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: ERROR   01:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:49:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:49:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:02 compute-0 nova_compute[349548]: 2025-12-05 01:49:02.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  5 01:49:04 compute-0 podman[414063]: 2025-12-05 01:49:04.754294819 +0000 UTC m=+0.125620462 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  5 01:49:04 compute-0 podman[414060]: 2025-12-05 01:49:04.764308081 +0000 UTC m=+0.156004416 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Dec  5 01:49:04 compute-0 podman[414062]: 2025-12-05 01:49:04.765581396 +0000 UTC m=+0.142445404 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  5 01:49:04 compute-0 podman[414061]: 2025-12-05 01:49:04.773374045 +0000 UTC m=+0.158385382 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:49:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:06 compute-0 nova_compute[349548]: 2025-12-05 01:49:06.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:07 compute-0 nova_compute[349548]: 2025-12-05 01:49:07.643 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.129 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.130 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.158 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.380 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.380 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.393 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.393 349552 INFO nova.compute.claims [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 01:49:08 compute-0 nova_compute[349548]: 2025-12-05 01:49:08.587 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:49:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932633543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.052 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.064 349552 DEBUG nova.compute.provider_tree [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.082 349552 DEBUG nova.scheduler.client.report [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.113 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.114 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.172 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.174 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.205 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.242 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.339 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.341 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.342 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating image(s)#033[00m
Dec  5 01:49:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 78 MiB data, 198 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.398 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.452 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.507 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.516 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.609 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.611 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.613 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.614 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.667 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:09 compute-0 nova_compute[349548]: 2025-12-05 01:49:09.681 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.137 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.314 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.574 349552 DEBUG nova.objects.instance [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.640 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.695 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.713 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.813 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.816 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.818 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.820 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.875 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:10 compute-0 nova_compute[349548]: 2025-12-05 01:49:10.903 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:11 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 79 MiB data, 198 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 1.8 KiB/s wr, 0 op/s
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.523 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.744 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.745 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Ensure instance console log exists: /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.746 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.747 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:11 compute-0 nova_compute[349548]: 2025-12-05 01:49:11.747 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.550 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Successfully updated port: 554930d3-ff53-4ef1-af0a-bad6acef1456 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.566 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 01:49:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:12 compute-0 nova_compute[349548]: 2025-12-05 01:49:12.648 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.013 349552 DEBUG nova.compute.manager [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.014 349552 DEBUG nova.compute.manager [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing instance network info cache due to event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.015 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:49:13 compute-0 nova_compute[349548]: 2025-12-05 01:49:13.358 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 01:49:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 106 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.524 349552 DEBUG nova.network.neutron [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.549 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.549 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance network_info: |[{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.550 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.551 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.557 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start _get_guest_xml network_info=[{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.569 349552 WARNING nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.588 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.589 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.595 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.596 349552 DEBUG nova.virt.libvirt.host [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.596 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.597 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.597 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.598 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.599 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.599 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.600 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.601 349552 DEBUG nova.virt.hardware [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 01:49:14 compute-0 nova_compute[349548]: 2025-12-05 01:49:14.605 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:49:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1781996239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.217 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.219 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:49:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:49:15 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888621187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:49:15 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.728 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.785 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:15 compute-0 nova_compute[349548]: 2025-12-05 01:49:15.798 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.010 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated VIF entry in instance network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.011 349552 DEBUG nova.network.neutron [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.031 349552 DEBUG oslo_concurrency.lockutils [req-096b245d-d16e-499c-8ec6-ed988b956ed1 req-a087135e-135c-4997-9c09-7943a3616e80 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:49:16
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'images', '.rgw.root']
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:49:16 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081362992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.351 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.369 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.371 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:49:09Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  5 01:49:16 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.371 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.373 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.375 349552 DEBUG nova.objects.instance [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.392 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] End _get_guest_xml xml=<domain type="kvm">
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <uuid>b82c3f0e-6d6a-4a7b-9556-b609ad63e497</uuid>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <name>instance-00000002</name>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <memory>524288</memory>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <metadata>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:name>vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj</nova:name>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 01:49:14</nova:creationTime>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:flavor name="m1.small">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:memory>512</nova:memory>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:ephemeral>1</nova:ephemeral>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <nova:port uuid="554930d3-ff53-4ef1-af0a-bad6acef1456">
Dec  5 01:49:16 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="192.168.0.23" ipVersion="4"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </metadata>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <system>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="serial">b82c3f0e-6d6a-4a7b-9556-b609ad63e497</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="uuid">b82c3f0e-6d6a-4a7b-9556-b609ad63e497</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </system>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <os>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </os>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <features>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <apic/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </features>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </clock>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </source>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.eph0">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </source>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <target dev="vdb" bus="virtio"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </source>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:49:16 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:43:63:18"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <target dev="tap554930d3-ff"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/console.log" append="off"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </serial>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <video>
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </video>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 01:49:16 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 01:49:16 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 01:49:16 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:49:16 compute-0 nova_compute[349548]: </domain>
Dec  5 01:49:16 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.392 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Preparing to wait for external event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.393 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.393 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.394 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.395 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:49:09Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  5 01:49:16 compute-0 nova_compute[349548]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.395 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.396 349552 DEBUG nova.network.os_vif_util [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.397 349552 DEBUG os_vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.398 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.399 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.403 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap554930d3-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.404 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap554930d3-ff, col_values=(('external_ids', {'iface-id': '554930d3-ff53-4ef1-af0a-bad6acef1456', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:63:18', 'vm-uuid': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:16 compute-0 NetworkManager[49092]: <info>  [1764899356.4093] manager: (tap554930d3-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.421 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.422 349552 INFO os_vif [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff')#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.483 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.484 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.484 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.485 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:43:63:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.485 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Using config drive#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.542 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:16 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:49:16.371 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 01:49:16 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:49:16.395 349552 DEBUG nova.virt.libvirt.vif [None req-4272eb25-2b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:49:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.924 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Creating config drive at /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config#033[00m
Dec  5 01:49:16 compute-0 nova_compute[349548]: 2025-12-05 01:49:16.937 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cs05sit execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.088 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6cs05sit" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.142 349552 DEBUG nova.storage.rbd_utils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.154 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.478 349552 DEBUG oslo_concurrency.processutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config b82c3f0e-6d6a-4a7b-9556-b609ad63e497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.479 349552 INFO nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deleting local config drive /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.config because it was imported into RBD.#033[00m
Dec  5 01:49:17 compute-0 kernel: tap554930d3-ff: entered promiscuous mode
Dec  5 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.5721] manager: (tap554930d3-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.578 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00035|binding|INFO|Claiming lport 554930d3-ff53-4ef1-af0a-bad6acef1456 for this chassis.
Dec  5 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00036|binding|INFO|554930d3-ff53-4ef1-af0a-bad6acef1456: Claiming fa:16:3e:43:63:18 192.168.0.23
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.594 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:63:18 192.168.0.23'], port_security=['fa:16:3e:43:63:18 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=554930d3-ff53-4ef1-af0a-bad6acef1456) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.595 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 554930d3-ff53-4ef1-af0a-bad6acef1456 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.596 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 01:49:17 compute-0 systemd-udevd[414624]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.621 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd80830-593a-46fc-96e4-75f54308fef5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00037|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 ovn-installed in OVS
Dec  5 01:49:17 compute-0 ovn_controller[89286]: 2025-12-05T01:49:17Z|00038|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 up in Southbound
Dec  5 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.6334] device (tap554930d3-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.636 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:17 compute-0 NetworkManager[49092]: <info>  [1764899357.6409] device (tap554930d3-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 01:49:17 compute-0 systemd-machined[138700]: New machine qemu-2-instance-00000002.
Dec  5 01:49:17 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.670 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[df2872c2-5314-4172-8e89-3996591b3054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.674 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fb9944-03d8-47ef-a5ab-be1341796ec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.714 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b05f7c0e-60a7-4451-bbeb-783b36d8593f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.736 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97d323b3-90f8-473e-a43d-af3397a2e937]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 20575, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 414639, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.760 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[427c8aa9-fa59-42f0-9154-f1d301f0daf4]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414640, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 414640, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.762 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:17 compute-0 nova_compute[349548]: 2025-12-05 01:49:17.769 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.770 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.770 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.771 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:17 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:17.772 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.437 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899358.4372408, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.438 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Started (Lifecycle Event)#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.467 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.476 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899358.437429, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.477 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Paused (Lifecycle Event)#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.530 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.540 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:49:18 compute-0 nova_compute[349548]: 2025-12-05 01:49:18.562 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.196 349552 DEBUG nova.compute.manager [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.196 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.197 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.197 349552 DEBUG oslo_concurrency.lockutils [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.198 349552 DEBUG nova.compute.manager [req-cffbf402-ae4e-40ce-bc5e-80c9e25379cf req-af011fa0-8536-4c28-b083-752c01359f14 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Processing event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.199 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.206 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899359.2056804, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.207 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Resumed (Lifecycle Event)#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.212 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.221 349552 INFO nova.virt.libvirt.driver [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance spawned successfully.#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.221 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.225 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.239 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.250 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.251 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.252 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.253 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.254 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.255 349552 DEBUG nova.virt.libvirt.driver [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:49:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:19.259 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:49:19 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:19.261 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.267 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.270 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.312 349552 INFO nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 9.97 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.312 349552 DEBUG nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:49:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 110 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.381 349552 INFO nova.compute.manager [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 11.04 seconds to build instance.#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.398 349552 DEBUG oslo_concurrency.lockutils [None req-4272eb25-2bfb-4e4c-aac8-bdc0954d54c3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.268s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.774 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.775 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.775 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.776 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:19 compute-0 nova_compute[349548]: 2025-12-05 01:49:19.776 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:49:20 compute-0 nova_compute[349548]: 2025-12-05 01:49:20.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:20 compute-0 nova_compute[349548]: 2025-12-05 01:49:20.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.113 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.113 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.114 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.289 349552 DEBUG nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.290 349552 DEBUG oslo_concurrency.lockutils [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.291 349552 DEBUG nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.291 349552 WARNING nova.compute.manager [req-51baec92-7a32-4423-98fd-4b11c5bca7f7 req-ae5aaa2a-5e98-4b45-9107-e3d2b687b424 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received unexpected event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with vm_state active and task_state None.#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.355 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.582 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:49:21 compute-0 nova_compute[349548]: 2025-12-05 01:49:21.582 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:49:21 compute-0 podman[414703]: 2025-12-05 01:49:21.697524949 +0000 UTC m=+0.113776049 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 01:49:21 compute-0 podman[414704]: 2025-12-05 01:49:21.708783505 +0000 UTC m=+0.109786677 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:49:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 669 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Dec  5 01:49:23 compute-0 podman[414741]: 2025-12-05 01:49:23.711198695 +0000 UTC m=+0.121435644 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:49:23 compute-0 podman[414740]: 2025-12-05 01:49:23.729742397 +0000 UTC m=+0.140330056 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:49:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 23 KiB/s wr, 46 op/s
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.402 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.417 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.418 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.419 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.419 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.455 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.456 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.456 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.457 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:49:25 compute-0 nova_compute[349548]: 2025-12-05 01:49:25.457 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:49:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601159938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.009 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.158 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.166 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.167 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.167 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:49:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:26.264 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.357 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008201929494692974 of space, bias 1.0, pg target 0.24605788484078922 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:49:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.684 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3818MB free_disk=59.93907928466797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.686 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.786 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.787 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:49:26 compute-0 nova_compute[349548]: 2025-12-05 01:49:26.868 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:49:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:49:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772842204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.365 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.378 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:49:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 61 op/s
Dec  5 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.405 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.459 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:49:27 compute-0 nova_compute[349548]: 2025-12-05 01:49:27.460 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:27 compute-0 podman[414823]: 2025-12-05 01:49:27.729496409 +0000 UTC m=+0.145400668 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, description=The Universal Base Image is designed and engineered 
to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec  5 01:49:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  5 01:49:29 compute-0 podman[158197]: time="2025-12-05T01:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:49:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:49:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8611 "" "Go-http-client/1.1"
Dec  5 01:49:31 compute-0 nova_compute[349548]: 2025-12-05 01:49:31.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1204: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  5 01:49:31 compute-0 nova_compute[349548]: 2025-12-05 01:49:31.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: ERROR   01:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:49:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:49:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:32 compute-0 podman[415010]: 2025-12-05 01:49:32.64665229 +0000 UTC m=+0.120641122 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:49:32 compute-0 podman[415010]: 2025-12-05 01:49:32.760794628 +0000 UTC m=+0.234783380 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:49:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 170 B/s wr, 56 op/s
Dec  5 01:49:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:49:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:49:34 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7285b0d1-e9ca-4284-9a45-7209c8c782d0 does not exist
Dec  5 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 89d86a9f-4042-4959-9281-aed5b031a10c does not exist
Dec  5 01:49:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ead88ac9-311b-43f7-87b9-d2a5461e4377 does not exist
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:49:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 876 KiB/s rd, 27 op/s
Dec  5 01:49:35 compute-0 podman[415317]: 2025-12-05 01:49:35.487432123 +0000 UTC m=+0.112965716 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public)
Dec  5 01:49:35 compute-0 podman[415308]: 2025-12-05 01:49:35.49087871 +0000 UTC m=+0.143442623 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  5 01:49:35 compute-0 podman[415309]: 2025-12-05 01:49:35.494473041 +0000 UTC m=+0.136279612 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:49:35 compute-0 podman[415310]: 2025-12-05 01:49:35.524030131 +0000 UTC m=+0.151871229 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  5 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:35 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.148494293 +0000 UTC m=+0.080018910 container create 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.121057171 +0000 UTC m=+0.052581778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:36 compute-0 systemd[1]: Started libpod-conmon-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope.
Dec  5 01:49:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.30138146 +0000 UTC m=+0.232906057 container init 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.31169985 +0000 UTC m=+0.243224467 container start 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.318361057 +0000 UTC m=+0.249885664 container attach 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:49:36 compute-0 focused_wilson[415522]: 167 167
Dec  5 01:49:36 compute-0 systemd[1]: libpod-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope: Deactivated successfully.
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.326760783 +0000 UTC m=+0.258285370 container died 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fec9c873bd35a411320e8d9d147f8929e7783c7d13c037acc97340b58aff708-merged.mount: Deactivated successfully.
Dec  5 01:49:36 compute-0 nova_compute[349548]: 2025-12-05 01:49:36.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:36 compute-0 podman[415506]: 2025-12-05 01:49:36.3946231 +0000 UTC m=+0.326147687 container remove 8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:49:36 compute-0 systemd[1]: libpod-conmon-8b9d8cedc77bde7ee349e718c660319255ece5bea18bf54dcf0f6c7733a86d23.scope: Deactivated successfully.
Dec  5 01:49:36 compute-0 nova_compute[349548]: 2025-12-05 01:49:36.417 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.675541686 +0000 UTC m=+0.091888134 container create 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.64118628 +0000 UTC m=+0.057532778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:36 compute-0 systemd[1]: Started libpod-conmon-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope.
Dec  5 01:49:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.884159419 +0000 UTC m=+0.300505877 container init 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.908211975 +0000 UTC m=+0.324558433 container start 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:49:36 compute-0 podman[415543]: 2025-12-05 01:49:36.915942023 +0000 UTC m=+0.332288521 container attach 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:49:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 784 KiB/s rd, 24 op/s
Dec  5 01:49:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:38 compute-0 focused_hertz[415559]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:49:38 compute-0 focused_hertz[415559]: --> relative data size: 1.0
Dec  5 01:49:38 compute-0 focused_hertz[415559]: --> All data devices are unavailable
Dec  5 01:49:38 compute-0 systemd[1]: libpod-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Deactivated successfully.
Dec  5 01:49:38 compute-0 systemd[1]: libpod-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Consumed 1.342s CPU time.
Dec  5 01:49:38 compute-0 podman[415543]: 2025-12-05 01:49:38.334389159 +0000 UTC m=+1.750735607 container died 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7e42a0e41f7d658d7dab3ea6bd33f9fdd54b5a8db8cc564482b0cd85845de8c-merged.mount: Deactivated successfully.
Dec  5 01:49:38 compute-0 podman[415543]: 2025-12-05 01:49:38.447067976 +0000 UTC m=+1.863414404 container remove 2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:49:38 compute-0 systemd[1]: libpod-conmon-2758df6ea9af5712b74482241efbef8a9d8c7cb03877da3bac3d91c2945d372b.scope: Deactivated successfully.
Dec  5 01:49:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1208: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.585523863 +0000 UTC m=+0.067518879 container create cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.559468561 +0000 UTC m=+0.041463586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:39 compute-0 systemd[1]: Started libpod-conmon-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope.
Dec  5 01:49:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.749297297 +0000 UTC m=+0.231292382 container init cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.767088527 +0000 UTC m=+0.249083522 container start cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.772361645 +0000 UTC m=+0.254356640 container attach cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:49:39 compute-0 happy_goodall[415754]: 167 167
Dec  5 01:49:39 compute-0 systemd[1]: libpod-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope: Deactivated successfully.
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.778814876 +0000 UTC m=+0.260809871 container died cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-39500b64761faec6ba4e81ae37e495dca72438a94211e052c4db8908e5283eb0-merged.mount: Deactivated successfully.
Dec  5 01:49:39 compute-0 podman[415738]: 2025-12-05 01:49:39.847143577 +0000 UTC m=+0.329138572 container remove cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:49:39 compute-0 systemd[1]: libpod-conmon-cef8519ab84f02928b161a17255fe2ee127914ebf0bdd5b31299c4a5504ddb88.scope: Deactivated successfully.
Dec  5 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.115493009 +0000 UTC m=+0.060310226 container create ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 01:49:40 compute-0 systemd[1]: Started libpod-conmon-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope.
Dec  5 01:49:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.095852687 +0000 UTC m=+0.040669904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.206811175 +0000 UTC m=+0.151628462 container init ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.228603318 +0000 UTC m=+0.173420535 container start ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:49:40 compute-0 podman[415780]: 2025-12-05 01:49:40.241434739 +0000 UTC m=+0.186252046 container attach ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]: {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    "0": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "devices": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "/dev/loop3"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            ],
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_name": "ceph_lv0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_size": "21470642176",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "name": "ceph_lv0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "tags": {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_name": "ceph",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.crush_device_class": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.encrypted": "0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_id": "0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.vdo": "0"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            },
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "vg_name": "ceph_vg0"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        }
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    ],
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    "1": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "devices": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "/dev/loop4"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            ],
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_name": "ceph_lv1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_size": "21470642176",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "name": "ceph_lv1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "tags": {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_name": "ceph",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.crush_device_class": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.encrypted": "0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_id": "1",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.vdo": "0"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            },
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "vg_name": "ceph_vg1"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        }
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    ],
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    "2": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "devices": [
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "/dev/loop5"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            ],
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_name": "ceph_lv2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_size": "21470642176",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "name": "ceph_lv2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "tags": {
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.cluster_name": "ceph",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.crush_device_class": "",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.encrypted": "0",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osd_id": "2",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:                "ceph.vdo": "0"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            },
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "type": "block",
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:            "vg_name": "ceph_vg2"
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:        }
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]:    ]
Dec  5 01:49:40 compute-0 great_proskuriakova[415796]: }
Dec  5 01:49:41 compute-0 systemd[1]: libpod-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope: Deactivated successfully.
Dec  5 01:49:41 compute-0 podman[415805]: 2025-12-05 01:49:41.113106608 +0000 UTC m=+0.064736040 container died ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-871b8c987ce21e8eaf09d34c48acc1a3bb015bade895392034f5712d26be8319-merged.mount: Deactivated successfully.
Dec  5 01:49:41 compute-0 podman[415805]: 2025-12-05 01:49:41.196188493 +0000 UTC m=+0.147817895 container remove ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_proskuriakova, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:49:41 compute-0 systemd[1]: libpod-conmon-ffea9984a50f13179cf92dbcd37fb51875e9de9a3e8683eac78c72903d8806da.scope: Deactivated successfully.
Dec  5 01:49:41 compute-0 nova_compute[349548]: 2025-12-05 01:49:41.365 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:41 compute-0 nova_compute[349548]: 2025-12-05 01:49:41.425 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.438709965 +0000 UTC m=+0.090680630 container create d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.406079588 +0000 UTC m=+0.058050353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:42 compute-0 systemd[1]: Started libpod-conmon-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope.
Dec  5 01:49:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.600725288 +0000 UTC m=+0.252695993 container init d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.619165157 +0000 UTC m=+0.271135832 container start d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.625489795 +0000 UTC m=+0.277460510 container attach d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:49:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:42 compute-0 brave_poitras[415974]: 167 167
Dec  5 01:49:42 compute-0 systemd[1]: libpod-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope: Deactivated successfully.
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.636105963 +0000 UTC m=+0.288076628 container died d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:49:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3ddd612ad0bf1d7028392304452fc09319f1a601464d802644fb7ff5fe91856-merged.mount: Deactivated successfully.
Dec  5 01:49:42 compute-0 podman[415959]: 2025-12-05 01:49:42.711390399 +0000 UTC m=+0.363361034 container remove d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_poitras, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:49:42 compute-0 systemd[1]: libpod-conmon-d2fa6e350c5eae094c12b554feb4fcc254543778c7c93e7bc83ef545d88ebc06.scope: Deactivated successfully.
Dec  5 01:49:42 compute-0 podman[415997]: 2025-12-05 01:49:42.973214928 +0000 UTC m=+0.099204420 container create 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:42.923875071 +0000 UTC m=+0.049864633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:49:43 compute-0 systemd[1]: Started libpod-conmon-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope.
Dec  5 01:49:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.145034927 +0000 UTC m=+0.271024509 container init 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.155855211 +0000 UTC m=+0.281844713 container start 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:49:43 compute-0 podman[415997]: 2025-12-05 01:49:43.161490399 +0000 UTC m=+0.287479921 container attach 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:49:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:44 compute-0 gallant_einstein[416013]: {
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_id": 0,
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "type": "bluestore"
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    },
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_id": 1,
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "type": "bluestore"
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    },
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_id": 2,
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:        "type": "bluestore"
Dec  5 01:49:44 compute-0 gallant_einstein[416013]:    }
Dec  5 01:49:44 compute-0 gallant_einstein[416013]: }
Dec  5 01:49:44 compute-0 systemd[1]: libpod-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Deactivated successfully.
Dec  5 01:49:44 compute-0 podman[415997]: 2025-12-05 01:49:44.505720531 +0000 UTC m=+1.631710023 container died 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:49:44 compute-0 systemd[1]: libpod-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Consumed 1.283s CPU time.
Dec  5 01:49:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a59d9a4af107811c14eb5cdf5f5892f9ba5f97867a45f767928e58459808d7-merged.mount: Deactivated successfully.
Dec  5 01:49:44 compute-0 podman[415997]: 2025-12-05 01:49:44.59501836 +0000 UTC m=+1.721007852 container remove 1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:49:44 compute-0 systemd[1]: libpod-conmon-1fd699c29055334a988589f8e65e611a98d013985b0517974237ca24b3f7d3ca.scope: Deactivated successfully.
Dec  5 01:49:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:49:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:49:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2736901e-ce09-4f1e-b309-d2c3aed9c45a does not exist
Dec  5 01:49:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4247f875-db79-4afd-9855-45478bd00643 does not exist
Dec  5 01:49:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:49:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:49:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:49:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1921548264' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:49:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:49:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:45 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:49:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:49:46 compute-0 nova_compute[349548]: 2025-12-05 01:49:46.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:46 compute-0 nova_compute[349548]: 2025-12-05 01:49:46.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  5 01:49:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:47 compute-0 ovn_controller[89286]: 2025-12-05T01:49:47Z|00039|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec  5 01:49:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  5 01:49:51 compute-0 nova_compute[349548]: 2025-12-05 01:49:51.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 111 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s wr, 0 op/s
Dec  5 01:49:51 compute-0 nova_compute[349548]: 2025-12-05 01:49:51.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.757501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391757554, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1430, "num_deletes": 251, "total_data_size": 2187945, "memory_usage": 2227528, "flush_reason": "Manual Compaction"}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391775372, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2144357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24102, "largest_seqno": 25531, "table_properties": {"data_size": 2137662, "index_size": 3830, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14161, "raw_average_key_size": 20, "raw_value_size": 2124095, "raw_average_value_size": 3004, "num_data_blocks": 171, "num_entries": 707, "num_filter_entries": 707, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899253, "oldest_key_time": 1764899253, "file_creation_time": 1764899391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17908 microseconds, and 8046 cpu microseconds.
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.775417) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2144357 bytes OK
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.775434) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779799) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779813) EVENT_LOG_v1 {"time_micros": 1764899391779808, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.779830) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 2181608, prev total WAL file size 2181608, number of live WAL files 2.
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.780979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2094KB)], [56(6798KB)]
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391781060, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 9105876, "oldest_snapshot_seqno": -1}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4614 keys, 7358865 bytes, temperature: kUnknown
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391842466, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7358865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7327968, "index_size": 18243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 115515, "raw_average_key_size": 25, "raw_value_size": 7244292, "raw_average_value_size": 1570, "num_data_blocks": 756, "num_entries": 4614, "num_filter_entries": 4614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899391, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.842826) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7358865 bytes
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.846793) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.1 rd, 119.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 6.6 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(7.7) write-amplify(3.4) OK, records in: 5132, records dropped: 518 output_compression: NoCompression
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.846835) EVENT_LOG_v1 {"time_micros": 1764899391846813, "job": 30, "event": "compaction_finished", "compaction_time_micros": 61497, "compaction_time_cpu_micros": 34178, "output_level": 6, "num_output_files": 1, "total_output_size": 7358865, "num_input_records": 5132, "num_output_records": 4614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391848367, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899391851627, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.780621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:51 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:49:51.852230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:49:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:52 compute-0 podman[416110]: 2025-12-05 01:49:52.733226902 +0000 UTC m=+0.134445399 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  5 01:49:52 compute-0 podman[416111]: 2025-12-05 01:49:52.737376578 +0000 UTC m=+0.129649694 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:49:52 compute-0 ovn_controller[89286]: 2025-12-05T01:49:52Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:63:18 192.168.0.23
Dec  5 01:49:52 compute-0 ovn_controller[89286]: 2025-12-05T01:49:52Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:63:18 192.168.0.23
Dec  5 01:49:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 115 MiB data, 234 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 259 KiB/s wr, 7 op/s
Dec  5 01:49:54 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  5 01:49:54 compute-0 podman[416152]: 2025-12-05 01:49:54.688044044 +0000 UTC m=+0.096197224 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  5 01:49:54 compute-0 podman[416153]: 2025-12-05 01:49:54.699503936 +0000 UTC m=+0.099623572 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 01:49:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1216: 321 pgs: 321 active+clean; 123 MiB data, 237 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 509 KiB/s wr, 30 op/s
Dec  5 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.178 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:49:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:49:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:49:56 compute-0 nova_compute[349548]: 2025-12-05 01:49:56.371 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:56 compute-0 nova_compute[349548]: 2025-12-05 01:49:56.432 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:49:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1217: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  5 01:49:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:49:58 compute-0 podman[416190]: 2025-12-05 01:49:58.726570821 +0000 UTC m=+0.126333112 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, version=9.4, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec  5 01:49:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  5 01:49:59 compute-0 podman[158197]: time="2025-12-05T01:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:49:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:49:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8617 "" "Go-http-client/1.1"
Dec  5 01:50:01 compute-0 nova_compute[349548]: 2025-12-05 01:50:01.375 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1219: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: ERROR   01:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:50:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:50:01 compute-0 nova_compute[349548]: 2025-12-05 01:50:01.434 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  5 01:50:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Dec  5 01:50:05 compute-0 podman[416210]: 2025-12-05 01:50:05.72316048 +0000 UTC m=+0.128128722 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:50:05 compute-0 podman[416217]: 2025-12-05 01:50:05.74450569 +0000 UTC m=+0.122022170 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Dec  5 01:50:05 compute-0 podman[416211]: 2025-12-05 01:50:05.74664507 +0000 UTC m=+0.141019324 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:50:05 compute-0 podman[416212]: 2025-12-05 01:50:05.764757759 +0000 UTC m=+0.155249184 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec  5 01:50:06 compute-0 nova_compute[349548]: 2025-12-05 01:50:06.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:06 compute-0 nova_compute[349548]: 2025-12-05 01:50:06.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1222: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 1016 KiB/s wr, 28 op/s
Dec  5 01:50:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:50:10 compute-0 nova_compute[349548]: 2025-12-05 01:50:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:10 compute-0 nova_compute[349548]: 2025-12-05 01:50:10.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.104 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.381 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:50:11 compute-0 nova_compute[349548]: 2025-12-05 01:50:11.440 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:50:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:50:16
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images', '.mgr']
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:16 compute-0 nova_compute[349548]: 2025-12-05 01:50:16.385 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:16 compute-0 nova_compute[349548]: 2025-12-05 01:50:16.444 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:50:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:50:17 compute-0 nova_compute[349548]: 2025-12-05 01:50:17.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:19 compute-0 nova_compute[349548]: 2025-12-05 01:50:19.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.090 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:20 compute-0 nova_compute[349548]: 2025-12-05 01:50:20.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:21 compute-0 nova_compute[349548]: 2025-12-05 01:50:21.448 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.362 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.363 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:50:23 compute-0 nova_compute[349548]: 2025-12-05 01:50:23.364 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:50:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:23 compute-0 podman[416297]: 2025-12-05 01:50:23.698277551 +0000 UTC m=+0.099655412 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 01:50:23 compute-0 podman[416298]: 2025-12-05 01:50:23.706651926 +0000 UTC m=+0.100830265 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.884 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.911 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.912 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.913 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.914 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.952 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.953 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.954 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.955 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:50:24 compute-0 nova_compute[349548]: 2025-12-05 01:50:24.956 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:50:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1231: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 01:50:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:50:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1982490388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.467 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.608 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.609 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.609 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.618 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.618 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 nova_compute[349548]: 2025-12-05 01:50:25.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:50:25 compute-0 podman[416362]: 2025-12-05 01:50:25.720712142 +0000 UTC m=+0.134156921 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 01:50:25 compute-0 podman[416363]: 2025-12-05 01:50:25.771188231 +0000 UTC m=+0.179771844 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.234 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3751MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.235 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.450 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:26 compute-0 nova_compute[349548]: 2025-12-05 01:50:26.550 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:50:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:50:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:50:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1960956354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.139 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.150 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.172 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.174 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:50:27 compute-0 nova_compute[349548]: 2025-12-05 01:50:27.174 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:50:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 01:50:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 01:50:29 compute-0 podman[416422]: 2025-12-05 01:50:29.72014345 +0000 UTC m=+0.124185871 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 01:50:29 compute-0 podman[158197]: time="2025-12-05T01:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:50:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:50:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Dec  5 01:50:31 compute-0 nova_compute[349548]: 2025-12-05 01:50:31.395 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:50:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1234: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: ERROR   01:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:50:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:50:31 compute-0 nova_compute[349548]: 2025-12-05 01:50:31.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 01:50:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 01:50:36 compute-0 nova_compute[349548]: 2025-12-05 01:50:36.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:36 compute-0 nova_compute[349548]: 2025-12-05 01:50:36.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:36 compute-0 podman[416444]: 2025-12-05 01:50:36.727843487 +0000 UTC m=+0.121909277 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec  5 01:50:36 compute-0 podman[416442]: 2025-12-05 01:50:36.740995397 +0000 UTC m=+0.136887829 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:50:36 compute-0 podman[416441]: 2025-12-05 01:50:36.742135249 +0000 UTC m=+0.145032818 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  5 01:50:36 compute-0 podman[416443]: 2025-12-05 01:50:36.784583862 +0000 UTC m=+0.172107369 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec  5 01:50:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.2 KiB/s wr, 0 op/s
Dec  5 01:50:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.315 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.316 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.326 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.344 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 01:50:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:38.346 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.147 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 05 Dec 2025 01:50:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 x-openstack-request-id: req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.148 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b82c3f0e-6d6a-4a7b-9556-b609ad63e497", "name": "vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:49:06Z", "updated": "2025-12-05T01:49:19Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.23", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:43:63:18"}, {"version": 4, "addr": "192.168.122.213", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:43:63:18"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:49:19.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.149 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b82c3f0e-6d6a-4a7b-9556-b609ad63e497 used request id req-58f198cb-93ad-46c1-bb96-e853ba57f9d0 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.151 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:50:39.154686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:50:39.159501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.197 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.198 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.199 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.235 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.236 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.238 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.240 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.241 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.243 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:50:39.243355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.247 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.248 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.249 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:50:39.249310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.250 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>]
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.251 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.252 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.253 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.253 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.254 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.255 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:50:39.254553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.346 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.347 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.348 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 85 B/s wr, 0 op/s
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.427 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.428 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.429 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.432 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.434 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:50:39.433651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.435 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.435 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.436 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.437 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.438 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.440 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.442 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:50:39.442112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.443 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.444 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.445 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.445 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.446 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.448 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.449 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.449 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.450 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.451 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.452 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.452 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:50:39.450590) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.453 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.454 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.455 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.459 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.460 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.460 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.461 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:50:39.459479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.462 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41799680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.463 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.464 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.466 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.468 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:50:39.468228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.509 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.558 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.560 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:50:39.561159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.562 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.563 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.563 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9045351841 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.564 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.564 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.565 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.566 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.567 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.568 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:50:39.567516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.568 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.569 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.569 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.570 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.571 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.572 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.573 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.573 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.574 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:50:39.574340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.580 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.586 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b82c3f0e-6d6a-4a7b-9556-b609ad63e497 / tap554930d3-ff inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.587 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:50:39.590109) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.591 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.595 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:50:39.594647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.597 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.598 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.598 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.602 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:50:39.601811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.603 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.605 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:50:39.606300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.607 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 1299 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:50:39.610250) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.611 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.612 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:50:39.614182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.615 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj>]
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 49.03125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:50:39.617103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.617 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.1640625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:50:39.620191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.621 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.622 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.623 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.623 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 37 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:50:39.622932) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.625 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 182 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:50:39.625565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.627 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.628 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 36620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:50:39.628580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.629 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 40610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.631 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:50:39.631461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:50:39.634167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.635 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:50:39.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:50:41 compute-0 nova_compute[349548]: 2025-12-05 01:50:41.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 4.4 KiB/s wr, 1 op/s
Dec  5 01:50:41 compute-0 nova_compute[349548]: 2025-12-05 01:50:41.459 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec  5 01:50:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:50:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:50:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:50:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1333884908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:50:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dfd76698-4d74-4eb8-a05a-9fd45bfeaa20 does not exist
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c7330ea7-0108-4db8-9e5a-20e8f92141e7 does not exist
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3b90d230-4a08-4aa5-8e75-27167d5bcbf4 does not exist
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:50:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:50:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:50:46 compute-0 nova_compute[349548]: 2025-12-05 01:50:46.405 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:46 compute-0 nova_compute[349548]: 2025-12-05 01:50:46.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:46 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.033589639 +0000 UTC m=+0.080504314 container create e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:50:47 compute-0 systemd[1]: Started libpod-conmon-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope.
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.008523734 +0000 UTC m=+0.055438449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.151594826 +0000 UTC m=+0.198509521 container init e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.160606729 +0000 UTC m=+0.207521404 container start e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.166430012 +0000 UTC m=+0.213344747 container attach e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:50:47 compute-0 lucid_mirzakhani[416804]: 167 167
Dec  5 01:50:47 compute-0 systemd[1]: libpod-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope: Deactivated successfully.
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.170413524 +0000 UTC m=+0.217328199 container died e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceebd4cdcdb5e3517391ddb933f65a4c5808f61c88577f3d45ca524ffb53b903-merged.mount: Deactivated successfully.
Dec  5 01:50:47 compute-0 podman[416788]: 2025-12-05 01:50:47.234580568 +0000 UTC m=+0.281495243 container remove e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 01:50:47 compute-0 systemd[1]: libpod-conmon-e0d24a17cdd442f02b921e81098ffff45a5f833cb0a98da88a915fb2f64e06af.scope: Deactivated successfully.
Dec  5 01:50:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 7.1 KiB/s wr, 1 op/s
Dec  5 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.508401234 +0000 UTC m=+0.091320148 container create 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.46379292 +0000 UTC m=+0.046711854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:47 compute-0 systemd[1]: Started libpod-conmon-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope.
Dec  5 01:50:47 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.640049244 +0000 UTC m=+0.222968148 container init 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:50:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.659637765 +0000 UTC m=+0.242556659 container start 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:50:47 compute-0 podman[416827]: 2025-12-05 01:50:47.666064755 +0000 UTC m=+0.248983689 container attach 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:50:48 compute-0 vibrant_mestorf[416841]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:50:48 compute-0 vibrant_mestorf[416841]: --> relative data size: 1.0
Dec  5 01:50:48 compute-0 vibrant_mestorf[416841]: --> All data devices are unavailable
Dec  5 01:50:48 compute-0 systemd[1]: libpod-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Deactivated successfully.
Dec  5 01:50:48 compute-0 podman[416827]: 2025-12-05 01:50:48.910375488 +0000 UTC m=+1.493294412 container died 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:50:48 compute-0 systemd[1]: libpod-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Consumed 1.197s CPU time.
Dec  5 01:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2caafe7fcd9431722604f09be8f948776807c0fe7cc9ace0f74caa7c40465fc8-merged.mount: Deactivated successfully.
Dec  5 01:50:49 compute-0 podman[416827]: 2025-12-05 01:50:49.016226073 +0000 UTC m=+1.599144967 container remove 96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mestorf, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:50:49 compute-0 systemd[1]: libpod-conmon-96aaf190738ece6c8de78420c976fb7dba10db522f4aad3cac3a27a4a7a119a0.scope: Deactivated successfully.
Dec  5 01:50:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.135450539 +0000 UTC m=+0.088603131 container create 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.097443881 +0000 UTC m=+0.050596513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:50 compute-0 systemd[1]: Started libpod-conmon-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope.
Dec  5 01:50:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.321860409 +0000 UTC m=+0.275013051 container init 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.338153907 +0000 UTC m=+0.291306499 container start 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:50 compute-0 hungry_torvalds[417036]: 167 167
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.346017278 +0000 UTC m=+0.299169910 container attach 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:50 compute-0 systemd[1]: libpod-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope: Deactivated successfully.
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.349471955 +0000 UTC m=+0.302624537 container died 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 01:50:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c5f1df5ee704705dddda7b76e842b89a61f6d0ceb89d127c260db790936eb48-merged.mount: Deactivated successfully.
Dec  5 01:50:50 compute-0 podman[417023]: 2025-12-05 01:50:50.43931189 +0000 UTC m=+0.392464452 container remove 79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 01:50:50 compute-0 systemd[1]: libpod-conmon-79e07fd37124ec0fa3b436ba271b6e24ab6b71e9dd42b63395686faf66c55a34.scope: Deactivated successfully.
Dec  5 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.724502595 +0000 UTC m=+0.086676187 container create 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.685276613 +0000 UTC m=+0.047450215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:50 compute-0 systemd[1]: Started libpod-conmon-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope.
Dec  5 01:50:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.922361986 +0000 UTC m=+0.284535588 container init 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.943297855 +0000 UTC m=+0.305471437 container start 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:50:50 compute-0 podman[417059]: 2025-12-05 01:50:50.951271409 +0000 UTC m=+0.313445031 container attach 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:50:51 compute-0 nova_compute[349548]: 2025-12-05 01:50:51.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s wr, 0 op/s
Dec  5 01:50:51 compute-0 nova_compute[349548]: 2025-12-05 01:50:51.465 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]: {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    "0": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "devices": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "/dev/loop3"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            ],
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_name": "ceph_lv0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_size": "21470642176",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "name": "ceph_lv0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "tags": {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_name": "ceph",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.crush_device_class": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.encrypted": "0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_id": "0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.vdo": "0"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            },
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "vg_name": "ceph_vg0"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        }
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    ],
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    "1": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "devices": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "/dev/loop4"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            ],
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_name": "ceph_lv1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_size": "21470642176",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "name": "ceph_lv1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "tags": {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_name": "ceph",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.crush_device_class": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.encrypted": "0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_id": "1",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.vdo": "0"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            },
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "vg_name": "ceph_vg1"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        }
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    ],
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    "2": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "devices": [
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "/dev/loop5"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            ],
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_name": "ceph_lv2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_size": "21470642176",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "name": "ceph_lv2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "tags": {
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.cluster_name": "ceph",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.crush_device_class": "",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.encrypted": "0",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osd_id": "2",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:                "ceph.vdo": "0"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            },
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "type": "block",
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:            "vg_name": "ceph_vg2"
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:        }
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]:    ]
Dec  5 01:50:51 compute-0 intelligent_cohen[417075]: }
Dec  5 01:50:51 compute-0 systemd[1]: libpod-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope: Deactivated successfully.
Dec  5 01:50:51 compute-0 podman[417059]: 2025-12-05 01:50:51.8475404 +0000 UTC m=+1.209713992 container died 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:50:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d127c47c74c8ef44b9b8876303ef525c8449a66a3f16535011a6f9ce588f906-merged.mount: Deactivated successfully.
Dec  5 01:50:51 compute-0 podman[417059]: 2025-12-05 01:50:51.950652928 +0000 UTC m=+1.312826490 container remove 0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 01:50:51 compute-0 systemd[1]: libpod-conmon-0198fc6f51bd93f201176463fd2e83ea56a712d860c6fe1062a97522080dc822.scope: Deactivated successfully.
Dec  5 01:50:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.127108483 +0000 UTC m=+0.083845707 container create 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.093971442 +0000 UTC m=+0.050708716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:53 compute-0 systemd[1]: Started libpod-conmon-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope.
Dec  5 01:50:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.32409619 +0000 UTC m=+0.280833414 container init 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.345744049 +0000 UTC m=+0.302481273 container start 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.353234289 +0000 UTC m=+0.309971483 container attach 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:53 compute-0 interesting_mirzakhani[417250]: 167 167
Dec  5 01:50:53 compute-0 systemd[1]: libpod-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope: Deactivated successfully.
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.35787914 +0000 UTC m=+0.314616324 container died 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:50:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3135414f441c6ebb5a41c0262287df5e0969c357ad91399fd42f0b5716402a22-merged.mount: Deactivated successfully.
Dec  5 01:50:53 compute-0 podman[417234]: 2025-12-05 01:50:53.420793658 +0000 UTC m=+0.377530862 container remove 06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_mirzakhani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:50:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  5 01:50:53 compute-0 systemd[1]: libpod-conmon-06e93d89771bc55a6172214a3a7f11c49ceeaae6803466e9480db7802500d7b8.scope: Deactivated successfully.
Dec  5 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.683187992 +0000 UTC m=+0.079313610 container create e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.659758933 +0000 UTC m=+0.055884571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:50:53 compute-0 systemd[1]: Started libpod-conmon-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope.
Dec  5 01:50:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.877573555 +0000 UTC m=+0.273699193 container init e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.895435127 +0000 UTC m=+0.291560755 container start e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:50:53 compute-0 podman[417273]: 2025-12-05 01:50:53.902716202 +0000 UTC m=+0.298841850 container attach e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Dec  5 01:50:53 compute-0 podman[417290]: 2025-12-05 01:50:53.918275929 +0000 UTC m=+0.109260722 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 01:50:53 compute-0 podman[417291]: 2025-12-05 01:50:53.934565337 +0000 UTC m=+0.105514607 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:50:54 compute-0 infallible_ptolemy[417292]: {
Dec  5 01:50:54 compute-0 infallible_ptolemy[417292]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:50:54 compute-0 infallible_ptolemy[417292]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_id": 0,
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "type": "bluestore"
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:    },
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_id": 1,
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "type": "bluestore"
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:    },
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_id": 2,
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:        "type": "bluestore"
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]:    }
Dec  5 01:50:55 compute-0 infallible_ptolemy[417292]: }
Dec  5 01:50:55 compute-0 systemd[1]: libpod-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Deactivated successfully.
Dec  5 01:50:55 compute-0 systemd[1]: libpod-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Consumed 1.149s CPU time.
Dec  5 01:50:55 compute-0 podman[417273]: 2025-12-05 01:50:55.0490328 +0000 UTC m=+1.445158529 container died e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:50:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-13c76badd6e610703e8e57aebf9d0dced77460115f76befbcee03c451cf18781-merged.mount: Deactivated successfully.
Dec  5 01:50:55 compute-0 podman[417273]: 2025-12-05 01:50:55.157753876 +0000 UTC m=+1.553879504 container remove e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 01:50:55 compute-0 systemd[1]: libpod-conmon-e6fa7bacebabd47cde41ebe74d378374790a0c72c353b81094d73651e0dfb0b7.scope: Deactivated successfully.
Dec  5 01:50:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:50:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:50:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 68cf5ad9-f711-450e-9428-69c25c9419b0 does not exist
Dec  5 01:50:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 62354550-00ff-45b0-8d49-dc6cd1436235 does not exist
Dec  5 01:50:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.179 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:50:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:50:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:50:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:50:56 compute-0 nova_compute[349548]: 2025-12-05 01:50:56.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:56 compute-0 nova_compute[349548]: 2025-12-05 01:50:56.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:50:56 compute-0 podman[417427]: 2025-12-05 01:50:56.718574395 +0000 UTC m=+0.127157905 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:50:56 compute-0 podman[417426]: 2025-12-05 01:50:56.767591843 +0000 UTC m=+0.169217717 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:50:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:50:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:50:59 compute-0 podman[158197]: time="2025-12-05T01:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:50:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:50:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8629 "" "Go-http-client/1.1"
Dec  5 01:51:00 compute-0 podman[417462]: 2025-12-05 01:51:00.676120535 +0000 UTC m=+0.089160816 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, 
url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.openshift.expose-services=)
Dec  5 01:51:01 compute-0 nova_compute[349548]: 2025-12-05 01:51:01.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: ERROR   01:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:51:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:51:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:01 compute-0 nova_compute[349548]: 2025-12-05 01:51:01.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:06 compute-0 nova_compute[349548]: 2025-12-05 01:51:06.418 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:06 compute-0 nova_compute[349548]: 2025-12-05 01:51:06.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:07 compute-0 podman[417482]: 2025-12-05 01:51:07.721828912 +0000 UTC m=+0.123168383 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:51:07 compute-0 podman[417483]: 2025-12-05 01:51:07.741770412 +0000 UTC m=+0.135340225 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:51:07 compute-0 podman[417485]: 2025-12-05 01:51:07.757779492 +0000 UTC m=+0.139083410 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  5 01:51:07 compute-0 podman[417484]: 2025-12-05 01:51:07.795752279 +0000 UTC m=+0.171403438 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:51:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:11 compute-0 nova_compute[349548]: 2025-12-05 01:51:11.420 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:11 compute-0 nova_compute[349548]: 2025-12-05 01:51:11.478 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:51:16
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta']
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:16 compute-0 nova_compute[349548]: 2025-12-05 01:51:16.422 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:16 compute-0 nova_compute[349548]: 2025-12-05 01:51:16.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:51:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:51:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:17 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 01:51:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:19 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.326 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.327 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.328 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.426 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s wr, 0 op/s
Dec  5 01:51:21 compute-0 nova_compute[349548]: 2025-12-05 01:51:21.483 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:22 compute-0 nova_compute[349548]: 2025-12-05 01:51:22.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:51:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.500 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:51:23 compute-0 nova_compute[349548]: 2025-12-05 01:51:23.501 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:51:24 compute-0 podman[417566]: 2025-12-05 01:51:24.722211073 +0000 UTC m=+0.121941548 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:51:24 compute-0 podman[417565]: 2025-12-05 01:51:24.72458032 +0000 UTC m=+0.129712967 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  5 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.962 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.982 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.982 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:51:24 compute-0 nova_compute[349548]: 2025-12-05 01:51:24.983 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.098 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.124 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.124 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:51:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:51:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815787811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.624 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.750 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.750 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:25 compute-0 nova_compute[349548]: 2025-12-05 01:51:25.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.176 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.177 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3747MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.177 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.178 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.300 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.301 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.302 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.302 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.398 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.486 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:51:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:51:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:51:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287143309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.877 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.889 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.911 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.914 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:51:26 compute-0 nova_compute[349548]: 2025-12-05 01:51:26.914 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:51:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:27 compute-0 podman[417653]: 2025-12-05 01:51:27.688510403 +0000 UTC m=+0.096014209 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  5 01:51:27 compute-0 podman[417652]: 2025-12-05 01:51:27.706340084 +0000 UTC m=+0.112938676 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:51:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:29 compute-0 podman[158197]: time="2025-12-05T01:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:51:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:51:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8622 "" "Go-http-client/1.1"
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: ERROR   01:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:51:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:51:31 compute-0 nova_compute[349548]: 2025-12-05 01:51:31.430 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:31 compute-0 nova_compute[349548]: 2025-12-05 01:51:31.489 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:31 compute-0 podman[417690]: 2025-12-05 01:51:31.710353631 +0000 UTC m=+0.134218513 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git)
Dec  5 01:51:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:36 compute-0 nova_compute[349548]: 2025-12-05 01:51:36.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:36 compute-0 nova_compute[349548]: 2025-12-05 01:51:36.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:38 compute-0 podman[417711]: 2025-12-05 01:51:38.695828074 +0000 UTC m=+0.101525324 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:51:38 compute-0 podman[417710]: 2025-12-05 01:51:38.697708597 +0000 UTC m=+0.109299033 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:51:38 compute-0 podman[417713]: 2025-12-05 01:51:38.712827962 +0000 UTC m=+0.112166913 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 01:51:38 compute-0 podman[417712]: 2025-12-05 01:51:38.740281934 +0000 UTC m=+0.132225118 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 01:51:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:41 compute-0 nova_compute[349548]: 2025-12-05 01:51:41.435 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:41 compute-0 nova_compute[349548]: 2025-12-05 01:51:41.492 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:51:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:51:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:51:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3335169055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:51:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:51:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:51:46 compute-0 nova_compute[349548]: 2025-12-05 01:51:46.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:46 compute-0 nova_compute[349548]: 2025-12-05 01:51:46.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:51 compute-0 nova_compute[349548]: 2025-12-05 01:51:51.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:51 compute-0 nova_compute[349548]: 2025-12-05 01:51:51.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1275: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:55 compute-0 podman[417820]: 2025-12-05 01:51:55.744352578 +0000 UTC m=+0.152779465 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:51:55 compute-0 podman[417822]: 2025-12-05 01:51:55.747948779 +0000 UTC m=+0.144875873 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.180 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:51:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:51:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:51:56 compute-0 nova_compute[349548]: 2025-12-05 01:51:56.441 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:56 compute-0 nova_compute[349548]: 2025-12-05 01:51:56.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c43b53e7-837e-4d33-98f2-01878302d075 does not exist
Dec  5 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8340c628-1e23-458d-aa27-732ac72a16da does not exist
Dec  5 01:51:56 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15e66a8b-bfc7-4615-b8cd-690ffce4bbf6 does not exist
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:51:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:51:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:51:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:51:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:51:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.029413222 +0000 UTC m=+0.119573382 container create f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:57.990616951 +0000 UTC m=+0.080777131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:51:58 compute-0 systemd[1]: Started libpod-conmon-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope.
Dec  5 01:51:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.186109716 +0000 UTC m=+0.276269906 container init f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.20262275 +0000 UTC m=+0.292782900 container start f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.208153445 +0000 UTC m=+0.298313605 container attach f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 01:51:58 compute-0 relaxed_nightingale[418131]: 167 167
Dec  5 01:51:58 compute-0 systemd[1]: libpod-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope: Deactivated successfully.
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.211686755 +0000 UTC m=+0.301846915 container died f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:51:58 compute-0 podman[418116]: 2025-12-05 01:51:58.224456103 +0000 UTC m=+0.117381080 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:51:58 compute-0 podman[418119]: 2025-12-05 01:51:58.236515142 +0000 UTC m=+0.131524297 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Dec  5 01:51:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8fa11ecc8fe33b620ac001eceeb8fef697271570d29c3088874843a45907d8e-merged.mount: Deactivated successfully.
Dec  5 01:51:58 compute-0 podman[418104]: 2025-12-05 01:51:58.286821076 +0000 UTC m=+0.376981236 container remove f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:51:58 compute-0 systemd[1]: libpod-conmon-f38862cebb9596e5bcb0cc928531c3bae334216142e4a7273e4989576e865955.scope: Deactivated successfully.
Dec  5 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.571657642 +0000 UTC m=+0.095815224 container create e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.543346096 +0000 UTC m=+0.067503718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:51:58 compute-0 systemd[1]: Started libpod-conmon-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope.
Dec  5 01:51:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:51:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.745698834 +0000 UTC m=+0.269856416 container init e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.782066896 +0000 UTC m=+0.306224508 container start e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 01:51:58 compute-0 podman[418178]: 2025-12-05 01:51:58.79038886 +0000 UTC m=+0.314546462 container attach e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 01:51:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:51:59 compute-0 podman[158197]: time="2025-12-05T01:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:51:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45520 "" "Go-http-client/1.1"
Dec  5 01:51:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9031 "" "Go-http-client/1.1"
Dec  5 01:51:59 compute-0 kind_stonebraker[418195]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:51:59 compute-0 kind_stonebraker[418195]: --> relative data size: 1.0
Dec  5 01:51:59 compute-0 kind_stonebraker[418195]: --> All data devices are unavailable
Dec  5 01:52:00 compute-0 systemd[1]: libpod-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Deactivated successfully.
Dec  5 01:52:00 compute-0 systemd[1]: libpod-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Consumed 1.173s CPU time.
Dec  5 01:52:00 compute-0 podman[418178]: 2025-12-05 01:52:00.043999334 +0000 UTC m=+1.568156946 container died e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:52:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e67647ac600f23e01ac5d14513ba41f0e9f66fc5e7e0ef6d5612a3c9f031587b-merged.mount: Deactivated successfully.
Dec  5 01:52:00 compute-0 podman[418178]: 2025-12-05 01:52:00.132829111 +0000 UTC m=+1.656986683 container remove e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:52:00 compute-0 systemd[1]: libpod-conmon-e4e8cc298d89c0ec5c562840df0c998a72f62fd3582cd89edf041bb8b71fda3a.scope: Deactivated successfully.
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.234314359 +0000 UTC m=+0.063518166 container create 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.213158885 +0000 UTC m=+0.042362672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:52:01 compute-0 systemd[1]: Started libpod-conmon-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope.
Dec  5 01:52:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.38661293 +0000 UTC m=+0.215816787 container init 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.400462009 +0000 UTC m=+0.229665786 container start 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.405511581 +0000 UTC m=+0.234715458 container attach 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 01:52:01 compute-0 hopeful_wright[418389]: 167 167
Dec  5 01:52:01 compute-0 systemd[1]: libpod-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope: Deactivated successfully.
Dec  5 01:52:01 compute-0 conmon[418389]: conmon 683e804e26084946bf62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope/container/memory.events
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.410546562 +0000 UTC m=+0.239750329 container died 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: ERROR   01:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:52:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:52:01 compute-0 nova_compute[349548]: 2025-12-05 01:52:01.443 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0dfae183bad71dc35b57cffa714a869e9e15e288d6bf93c153de4c81d7f0c3ba-merged.mount: Deactivated successfully.
Dec  5 01:52:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1279: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:01 compute-0 podman[418374]: 2025-12-05 01:52:01.481931659 +0000 UTC m=+0.311135436 container remove 683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:52:01 compute-0 nova_compute[349548]: 2025-12-05 01:52:01.504 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:01 compute-0 systemd[1]: libpod-conmon-683e804e26084946bf623a292583097029d870f7fee42fd2d4a9c7172c5164b9.scope: Deactivated successfully.
Dec  5 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.776507827 +0000 UTC m=+0.099111447 container create 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.724759853 +0000 UTC m=+0.047363563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:52:01 compute-0 systemd[1]: Started libpod-conmon-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope.
Dec  5 01:52:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:01 compute-0 podman[418428]: 2025-12-05 01:52:01.938195712 +0000 UTC m=+0.102847912 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, container_name=kepler, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, config_id=edpm, release=1214.1726694543, architecture=x86_64)
Dec  5 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.95558855 +0000 UTC m=+0.278192200 container init 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  5 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.968523604 +0000 UTC m=+0.291127224 container start 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:52:01 compute-0 podman[418414]: 2025-12-05 01:52:01.973067132 +0000 UTC m=+0.295670782 container attach 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:52:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]: {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    "0": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "devices": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "/dev/loop3"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            ],
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_name": "ceph_lv0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_size": "21470642176",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "name": "ceph_lv0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "tags": {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_name": "ceph",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.crush_device_class": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.encrypted": "0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_id": "0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.vdo": "0"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            },
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "vg_name": "ceph_vg0"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        }
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    ],
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    "1": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "devices": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "/dev/loop4"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            ],
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_name": "ceph_lv1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_size": "21470642176",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "name": "ceph_lv1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "tags": {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_name": "ceph",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.crush_device_class": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.encrypted": "0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_id": "1",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.vdo": "0"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            },
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "vg_name": "ceph_vg1"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        }
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    ],
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    "2": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "devices": [
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "/dev/loop5"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            ],
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_name": "ceph_lv2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_size": "21470642176",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "name": "ceph_lv2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "tags": {
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.cluster_name": "ceph",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.crush_device_class": "",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.encrypted": "0",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osd_id": "2",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:                "ceph.vdo": "0"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            },
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "type": "block",
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:            "vg_name": "ceph_vg2"
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:        }
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]:    ]
Dec  5 01:52:02 compute-0 stupefied_gauss[418439]: }
Dec  5 01:52:02 compute-0 systemd[1]: libpod-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope: Deactivated successfully.
Dec  5 01:52:02 compute-0 podman[418414]: 2025-12-05 01:52:02.739836593 +0000 UTC m=+1.062440223 container died 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:52:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1039f4c548ba584d9b62bbe15a2410a126996fd1880ff692ffefc52b8d0aaf21-merged.mount: Deactivated successfully.
Dec  5 01:52:02 compute-0 podman[418414]: 2025-12-05 01:52:02.821490378 +0000 UTC m=+1.144093998 container remove 2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:52:02 compute-0 systemd[1]: libpod-conmon-2a7cb1f2e2c86a9ba6c314a1cd786a5c5609fedc35976912d183ec22d05ff954.scope: Deactivated successfully.
Dec  5 01:52:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:03 compute-0 podman[418607]: 2025-12-05 01:52:03.97255735 +0000 UTC m=+0.080120933 container create e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:03.947628269 +0000 UTC m=+0.055191912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:52:04 compute-0 systemd[1]: Started libpod-conmon-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope.
Dec  5 01:52:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.105314761 +0000 UTC m=+0.212878364 container init e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.114337655 +0000 UTC m=+0.221901228 container start e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.118961865 +0000 UTC m=+0.226525458 container attach e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:52:04 compute-0 brave_ellis[418623]: 167 167
Dec  5 01:52:04 compute-0 systemd[1]: libpod-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope: Deactivated successfully.
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.121487616 +0000 UTC m=+0.229051189 container died e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecabc8b3369dd63e3c4f9d4304f40126a6addbf5bafffc606fa1030166ae5afa-merged.mount: Deactivated successfully.
Dec  5 01:52:04 compute-0 podman[418607]: 2025-12-05 01:52:04.174244708 +0000 UTC m=+0.281808291 container remove e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:52:04 compute-0 systemd[1]: libpod-conmon-e03ae56bac1a2c27672b2c27e171caa80e11be8053ee1763b3ded87e58eac64e.scope: Deactivated successfully.
Dec  5 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.424362318 +0000 UTC m=+0.086498972 container create 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.391196116 +0000 UTC m=+0.053332810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:52:04 compute-0 systemd[1]: Started libpod-conmon-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope.
Dec  5 01:52:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.607822035 +0000 UTC m=+0.269958719 container init 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.627155158 +0000 UTC m=+0.289291772 container start 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 01:52:04 compute-0 podman[418646]: 2025-12-05 01:52:04.631615063 +0000 UTC m=+0.293751717 container attach 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:52:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:05 compute-0 modest_shockley[418662]: {
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_id": 0,
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "type": "bluestore"
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    },
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_id": 1,
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "type": "bluestore"
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    },
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_id": 2,
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:52:05 compute-0 modest_shockley[418662]:        "type": "bluestore"
Dec  5 01:52:05 compute-0 modest_shockley[418662]:    }
Dec  5 01:52:05 compute-0 modest_shockley[418662]: }
Dec  5 01:52:05 compute-0 systemd[1]: libpod-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Deactivated successfully.
Dec  5 01:52:05 compute-0 podman[418646]: 2025-12-05 01:52:05.778802115 +0000 UTC m=+1.440938739 container died 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 01:52:05 compute-0 systemd[1]: libpod-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Consumed 1.138s CPU time.
Dec  5 01:52:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-12bf3f2ed8ad5e5655b28fd45e8afab6698d8e976822a41e9aa520bd95f94b7f-merged.mount: Deactivated successfully.
Dec  5 01:52:05 compute-0 podman[418646]: 2025-12-05 01:52:05.855872372 +0000 UTC m=+1.518008996 container remove 2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_shockley, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:52:05 compute-0 systemd[1]: libpod-conmon-2160cf935e6ad96ee5bda2511b1f315d2de4ffd9916346f71c23373959a2c03e.scope: Deactivated successfully.
Dec  5 01:52:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:52:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:52:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:52:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:52:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0b2dd061-b004-4831-8301-51d38d67fa8f does not exist
Dec  5 01:52:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0d1fc0e5-84b2-4c26-a1fa-44e8dffc43ec does not exist
Dec  5 01:52:06 compute-0 nova_compute[349548]: 2025-12-05 01:52:06.446 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:06 compute-0 nova_compute[349548]: 2025-12-05 01:52:06.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:52:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:52:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:09 compute-0 podman[418758]: 2025-12-05 01:52:09.721691344 +0000 UTC m=+0.114533910 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec  5 01:52:09 compute-0 podman[418759]: 2025-12-05 01:52:09.737022975 +0000 UTC m=+0.131275031 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:52:09 compute-0 podman[418761]: 2025-12-05 01:52:09.745267997 +0000 UTC m=+0.114102618 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Dec  5 01:52:09 compute-0 podman[418760]: 2025-12-05 01:52:09.799140911 +0000 UTC m=+0.181615386 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:52:11 compute-0 nova_compute[349548]: 2025-12-05 01:52:11.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1284: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:11 compute-0 nova_compute[349548]: 2025-12-05 01:52:11.509 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1286: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:52:16
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', '.mgr', 'volumes']
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:16 compute-0 nova_compute[349548]: 2025-12-05 01:52:16.450 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:16 compute-0 nova_compute[349548]: 2025-12-05 01:52:16.512 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:52:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:52:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:21 compute-0 nova_compute[349548]: 2025-12-05 01:52:21.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:21 compute-0 nova_compute[349548]: 2025-12-05 01:52:21.515 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.884 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.884 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.885 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:22 compute-0 nova_compute[349548]: 2025-12-05 01:52:22.886 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:52:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.534 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.535 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:52:23 compute-0 nova_compute[349548]: 2025-12-05 01:52:23.536 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.107 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.124 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.126 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.150 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.151 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.151 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.152 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.152 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:52:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:52:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1054848363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.629 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.767 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:25 compute-0 nova_compute[349548]: 2025-12-05 01:52:25.782 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.256 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.257 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3745MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.258 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.258 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.446 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:26 compute-0 podman[418876]: 2025-12-05 01:52:26.69557926 +0000 UTC m=+0.088144428 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:52:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:52:26 compute-0 podman[418865]: 2025-12-05 01:52:26.734473754 +0000 UTC m=+0.131117757 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:52:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:52:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898561506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:52:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:52:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 5973 writes, 26K keys, 5973 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 5973 writes, 5973 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1367 writes, 6150 keys, 1367 commit groups, 1.0 writes per commit group, ingest: 8.79 MB, 0.01 MB/s#012Interval WAL: 1367 writes, 1367 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    108.2      0.28              0.13        15    0.019       0      0       0.0       0.0#012  L6      1/0    7.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    135.8    109.9      0.90              0.42        14    0.064     63K   7823       0.0       0.0#012 Sum      1/0    7.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    103.6    109.5      1.18              0.55        29    0.041     63K   7823       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     98.8     99.9      0.38              0.17         8    0.047     20K   2553       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.8    109.9      0.90              0.42        14    0.064     63K   7823       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    110.0      0.27              0.13        14    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.029, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.05 MB/s write, 0.12 GB read, 0.05 MB/s read, 1.2 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 13.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000117 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(849,12.64 MB,4.10526%) FilterBlock(30,181.92 KB,0.0576812%) IndexBlock(30,338.42 KB,0.107302%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.987 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:52:26 compute-0 nova_compute[349548]: 2025-12-05 01:52:26.998 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.024 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.027 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:52:27 compute-0 nova_compute[349548]: 2025-12-05 01:52:27.027 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:52:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:28 compute-0 podman[418929]: 2025-12-05 01:52:28.720058631 +0000 UTC m=+0.122203236 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:52:28 compute-0 podman[418928]: 2025-12-05 01:52:28.737655466 +0000 UTC m=+0.140600653 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  5 01:52:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:29 compute-0 podman[158197]: time="2025-12-05T01:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:52:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:52:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:52:31 compute-0 openstack_network_exporter[366555]: ERROR   01:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:52:31 compute-0 nova_compute[349548]: 2025-12-05 01:52:31.458 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:31 compute-0 nova_compute[349548]: 2025-12-05 01:52:31.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:32 compute-0 podman[418965]: 2025-12-05 01:52:32.690756011 +0000 UTC m=+0.108659015 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 01:52:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:36 compute-0 nova_compute[349548]: 2025-12-05 01:52:36.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:36 compute-0 nova_compute[349548]: 2025-12-05 01:52:36.528 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.316 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.317 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.331 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:52:38.339029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:52:38.342393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.378 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.378 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.379 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.418 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.419 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.420 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.423 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.425 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:52:38.422837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:52:38.427553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.520 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.521 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.522 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.619 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.620 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.621 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.624 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.624 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.625 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:52:38.623704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.625 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.626 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.627 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.629 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:52:38.628798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.630 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.631 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.631 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.632 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:52:38.635033) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.636 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.637 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.637 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.638 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.639 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.641 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.643 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:52:38.641714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.644 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.644 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.645 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.646 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.647 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.648 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:52:38.648433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.691 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.734 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.735 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9113944897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.736 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:52:38.735397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.737 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:52:38.737641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.738 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:52:38.739564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.745 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.750 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:52:38.750232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.751 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.752 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:52:38.751634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.753 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:52:38.753655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.754 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.755 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.756 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 49.03125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.15625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:52:38.754814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:52:38.756003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:52:38.757341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.758 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.759 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:52:38.758539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:52:38.759824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.760 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.761 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 38600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 158680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.762 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.763 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:52:38.761060) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:52:38.762210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.764 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:52:38.763435) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:52:38.764636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.765 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.766 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.767 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:52:38.768 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:52:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:40 compute-0 podman[418986]: 2025-12-05 01:52:40.704223125 +0000 UTC m=+0.116580986 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:52:40 compute-0 podman[418987]: 2025-12-05 01:52:40.705353086 +0000 UTC m=+0.108327095 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 01:52:40 compute-0 podman[418989]: 2025-12-05 01:52:40.739533543 +0000 UTC m=+0.129158728 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal)
Dec  5 01:52:40 compute-0 podman[418988]: 2025-12-05 01:52:40.75975769 +0000 UTC m=+0.153224082 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 01:52:41 compute-0 nova_compute[349548]: 2025-12-05 01:52:41.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:41 compute-0 nova_compute[349548]: 2025-12-05 01:52:41.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1300: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:52:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:52:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:52:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1828599589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:52:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:52:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:52:46 compute-0 nova_compute[349548]: 2025-12-05 01:52:46.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:46 compute-0 nova_compute[349548]: 2025-12-05 01:52:46.534 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:51 compute-0 nova_compute[349548]: 2025-12-05 01:52:51.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:51 compute-0 nova_compute[349548]: 2025-12-05 01:52:51.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1305: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.181 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.182 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:52:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:52:56.183 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:52:56 compute-0 nova_compute[349548]: 2025-12-05 01:52:56.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:56 compute-0 nova_compute[349548]: 2025-12-05 01:52:56.538 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:52:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:52:57 compute-0 podman[419072]: 2025-12-05 01:52:57.696849732 +0000 UTC m=+0.100718931 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 01:52:57 compute-0 podman[419073]: 2025-12-05 01:52:57.710795743 +0000 UTC m=+0.113496289 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:52:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:52:59 compute-0 podman[419109]: 2025-12-05 01:52:59.723822389 +0000 UTC m=+0.129793426 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:52:59 compute-0 podman[158197]: time="2025-12-05T01:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:52:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:52:59 compute-0 podman[419110]: 2025-12-05 01:52:59.774773656 +0000 UTC m=+0.176955267 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:52:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8622 "" "Go-http-client/1.1"
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: ERROR   01:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:53:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:53:01 compute-0 nova_compute[349548]: 2025-12-05 01:53:01.476 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:01 compute-0 nova_compute[349548]: 2025-12-05 01:53:01.541 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:03 compute-0 podman[419147]: 2025-12-05 01:53:03.678032478 +0000 UTC m=+0.102870442 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 01:53:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:06 compute-0 nova_compute[349548]: 2025-12-05 01:53:06.480 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:06 compute-0 nova_compute[349548]: 2025-12-05 01:53:06.544 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d9710f4-39d8-4eaf-a837-8402af085405 does not exist
Dec  5 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 69bc13fb-8402-4f49-a2f8-8166046b8312 does not exist
Dec  5 01:53:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 87213a3a-3f39-4970-88e9-1df0ec7da356 does not exist
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:53:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.381751989 +0000 UTC m=+0.092991335 container create 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.338510458 +0000 UTC m=+0.049749804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:08 compute-0 systemd[1]: Started libpod-conmon-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope.
Dec  5 01:53:08 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.548790377 +0000 UTC m=+0.260029753 container init 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.558112928 +0000 UTC m=+0.269352234 container start 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.562948384 +0000 UTC m=+0.274187750 container attach 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:53:08 compute-0 nostalgic_varahamihira[419451]: 167 167
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.57138945 +0000 UTC m=+0.282628796 container died 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:53:08 compute-0 systemd[1]: libpod-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope: Deactivated successfully.
Dec  5 01:53:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-92baa89a7fe535aea3b47f51cfa9f48e1286ab1d41d8dbf23f28e5b5f2a49227-merged.mount: Deactivated successfully.
Dec  5 01:53:08 compute-0 podman[419436]: 2025-12-05 01:53:08.670028623 +0000 UTC m=+0.381267939 container remove 0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 01:53:08 compute-0 systemd[1]: libpod-conmon-0c0341ad7548475be41dcac8e0cdfba3854bb3a4af9ecd9593c163d7dd3f8610.scope: Deactivated successfully.
Dec  5 01:53:08 compute-0 podman[419475]: 2025-12-05 01:53:08.926065893 +0000 UTC m=+0.079227200 container create 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:08.901086253 +0000 UTC m=+0.054247560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:09 compute-0 systemd[1]: Started libpod-conmon-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope.
Dec  5 01:53:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.086066474 +0000 UTC m=+0.239227831 container init 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.100546909 +0000 UTC m=+0.253708246 container start 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:53:09 compute-0 podman[419475]: 2025-12-05 01:53:09.108129732 +0000 UTC m=+0.261291089 container attach 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:53:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:10 compute-0 hungry_feistel[419492]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:53:10 compute-0 hungry_feistel[419492]: --> relative data size: 1.0
Dec  5 01:53:10 compute-0 hungry_feistel[419492]: --> All data devices are unavailable
Dec  5 01:53:10 compute-0 systemd[1]: libpod-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Deactivated successfully.
Dec  5 01:53:10 compute-0 podman[419475]: 2025-12-05 01:53:10.36604903 +0000 UTC m=+1.519210357 container died 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:53:10 compute-0 systemd[1]: libpod-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Consumed 1.198s CPU time.
Dec  5 01:53:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-32e88ac408648593cdeda7e402632fb32d311f4397eb6cc2adaaa4eeb9db0170-merged.mount: Deactivated successfully.
Dec  5 01:53:10 compute-0 podman[419475]: 2025-12-05 01:53:10.438434617 +0000 UTC m=+1.591595934 container remove 9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:53:10 compute-0 systemd[1]: libpod-conmon-9a76ffd9e14e27abf737c6bf65b04dff11435e32beacbecccd9d65aa457cfec7.scope: Deactivated successfully.
Dec  5 01:53:10 compute-0 podman[419581]: 2025-12-05 01:53:10.916917018 +0000 UTC m=+0.098135310 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 01:53:10 compute-0 podman[419582]: 2025-12-05 01:53:10.950150228 +0000 UTC m=+0.134620341 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:53:10 compute-0 podman[419583]: 2025-12-05 01:53:10.965123138 +0000 UTC m=+0.144939291 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  5 01:53:10 compute-0 podman[419584]: 2025-12-05 01:53:10.969821549 +0000 UTC m=+0.132015888 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.expose-services=, config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., distribution-scope=public)
Dec  5 01:53:11 compute-0 nova_compute[349548]: 2025-12-05 01:53:11.483 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:11 compute-0 nova_compute[349548]: 2025-12-05 01:53:11.547 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.618667121 +0000 UTC m=+0.078874090 container create 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.580846582 +0000 UTC m=+0.041053601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:11 compute-0 systemd[1]: Started libpod-conmon-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope.
Dec  5 01:53:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.771577993 +0000 UTC m=+0.231785002 container init 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.784830014 +0000 UTC m=+0.245036983 container start 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.791673376 +0000 UTC m=+0.251880385 container attach 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:53:11 compute-0 naughty_chaplygin[419769]: 167 167
Dec  5 01:53:11 compute-0 systemd[1]: libpod-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope: Deactivated successfully.
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.796481661 +0000 UTC m=+0.256688590 container died 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 01:53:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98174671b2e0c20cbed8c6ed796e5bda7dabed3dff77ba4e7d3cd85185b6c81-merged.mount: Deactivated successfully.
Dec  5 01:53:11 compute-0 podman[419753]: 2025-12-05 01:53:11.866607134 +0000 UTC m=+0.326814073 container remove 58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chaplygin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:53:11 compute-0 systemd[1]: libpod-conmon-58d5851364421d9062e567d5f59fede427f1194c8fe85e5835542a1c060f3829.scope: Deactivated successfully.
Dec  5 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.120145645 +0000 UTC m=+0.085485615 container create b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.085789693 +0000 UTC m=+0.051129713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:12 compute-0 systemd[1]: Started libpod-conmon-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope.
Dec  5 01:53:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.277811851 +0000 UTC m=+0.243151821 container init b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.296672739 +0000 UTC m=+0.262012679 container start b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:53:12 compute-0 podman[419791]: 2025-12-05 01:53:12.30277212 +0000 UTC m=+0.268112070 container attach b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 01:53:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:13 compute-0 wonderful_napier[419807]: {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    "0": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "devices": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "/dev/loop3"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            ],
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_name": "ceph_lv0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_size": "21470642176",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "name": "ceph_lv0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "tags": {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_name": "ceph",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.crush_device_class": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.encrypted": "0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_id": "0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.vdo": "0"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            },
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "vg_name": "ceph_vg0"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        }
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    ],
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    "1": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "devices": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "/dev/loop4"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            ],
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_name": "ceph_lv1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_size": "21470642176",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "name": "ceph_lv1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "tags": {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_name": "ceph",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.crush_device_class": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.encrypted": "0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_id": "1",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.vdo": "0"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            },
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "vg_name": "ceph_vg1"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        }
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    ],
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    "2": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "devices": [
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "/dev/loop5"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            ],
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_name": "ceph_lv2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_size": "21470642176",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "name": "ceph_lv2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "tags": {
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.cluster_name": "ceph",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.crush_device_class": "",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.encrypted": "0",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osd_id": "2",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:                "ceph.vdo": "0"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            },
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "type": "block",
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:            "vg_name": "ceph_vg2"
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:        }
Dec  5 01:53:13 compute-0 wonderful_napier[419807]:    ]
Dec  5 01:53:13 compute-0 wonderful_napier[419807]: }
Dec  5 01:53:13 compute-0 systemd[1]: libpod-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope: Deactivated successfully.
Dec  5 01:53:13 compute-0 podman[419791]: 2025-12-05 01:53:13.173301969 +0000 UTC m=+1.138641899 container died b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:53:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0c4c80b8253727b30fad30b8d17cfe3953494bd43049446657d92d92fba3d04-merged.mount: Deactivated successfully.
Dec  5 01:53:13 compute-0 podman[419791]: 2025-12-05 01:53:13.255707686 +0000 UTC m=+1.221047626 container remove b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_napier, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:53:13 compute-0 systemd[1]: libpod-conmon-b7630c3646e4f05775bd78fbcddcdf24b46de883c976a45d023765cb65374283.scope: Deactivated successfully.
Dec  5 01:53:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.321231407 +0000 UTC m=+0.092676746 container create d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.290699312 +0000 UTC m=+0.062144681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:14 compute-0 systemd[1]: Started libpod-conmon-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope.
Dec  5 01:53:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.466335901 +0000 UTC m=+0.237781220 container init d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.476737422 +0000 UTC m=+0.248182741 container start d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.482403811 +0000 UTC m=+0.253849160 container attach d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 01:53:14 compute-0 peaceful_williams[419983]: 167 167
Dec  5 01:53:14 compute-0 systemd[1]: libpod-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope: Deactivated successfully.
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.484347596 +0000 UTC m=+0.255792945 container died d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:53:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1411127e7999c03063b85e2375f6cc6331f3a5cb95f7b06d2d79b5aef601a3d5-merged.mount: Deactivated successfully.
Dec  5 01:53:14 compute-0 podman[419967]: 2025-12-05 01:53:14.577264808 +0000 UTC m=+0.348710157 container remove d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:53:14 compute-0 systemd[1]: libpod-conmon-d5feee262eb00d5fcab5e630cf418151998d739dbad5f75ecf4760617fc0049e.scope: Deactivated successfully.
Dec  5 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.856433086 +0000 UTC m=+0.075678560 container create 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:53:14 compute-0 systemd[1]: Started libpod-conmon-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope.
Dec  5 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.828243877 +0000 UTC m=+0.047489431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:53:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:53:14 compute-0 podman[420006]: 2025-12-05 01:53:14.97938817 +0000 UTC m=+0.198633744 container init 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:53:15 compute-0 podman[420006]: 2025-12-05 01:53:15.001371445 +0000 UTC m=+0.220616949 container start 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:53:15 compute-0 podman[420006]: 2025-12-05 01:53:15.009825962 +0000 UTC m=+0.229071466 container attach 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:53:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:16 compute-0 focused_hertz[420022]: {
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_id": 0,
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "type": "bluestore"
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    },
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_id": 1,
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "type": "bluestore"
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    },
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_id": 2,
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:53:16 compute-0 focused_hertz[420022]:        "type": "bluestore"
Dec  5 01:53:16 compute-0 focused_hertz[420022]:    }
Dec  5 01:53:16 compute-0 focused_hertz[420022]: }
Dec  5 01:53:16 compute-0 systemd[1]: libpod-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Deactivated successfully.
Dec  5 01:53:16 compute-0 podman[420006]: 2025-12-05 01:53:16.220964911 +0000 UTC m=+1.440210395 container died 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:53:16 compute-0 systemd[1]: libpod-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Consumed 1.215s CPU time.
Dec  5 01:53:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d710f031c737b0623acf7fa9fa8cd7bd95a9c58569e202e3eca3f78d0ccc0187-merged.mount: Deactivated successfully.
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:53:16
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log']
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:16 compute-0 podman[420006]: 2025-12-05 01:53:16.304404968 +0000 UTC m=+1.523650452 container remove 645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:53:16 compute-0 systemd[1]: libpod-conmon-645e6284fd3b6a1dc3ac914712e0468f658284c8bf89211838f056a387f87dc2.scope: Deactivated successfully.
Dec  5 01:53:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:53:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:53:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8bc703b0-8718-4a99-8d67-e0f21d9a2428 does not exist
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8347f290-45c8-49f4-8f1b-5d46eced7eab does not exist
Dec  5 01:53:16 compute-0 nova_compute[349548]: 2025-12-05 01:53:16.484 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:16 compute-0 nova_compute[349548]: 2025-12-05 01:53:16.549 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:53:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:53:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:53:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:21 compute-0 nova_compute[349548]: 2025-12-05 01:53:21.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:21 compute-0 nova_compute[349548]: 2025-12-05 01:53:21.552 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.970 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.971 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.972 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:22 compute-0 nova_compute[349548]: 2025-12-05 01:53:22.972 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  5 01:53:23 compute-0 nova_compute[349548]: 2025-12-05 01:53:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:23 compute-0 nova_compute[349548]: 2025-12-05 01:53:23.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.071 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.075 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.076 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.315 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.316 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.316 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  5 01:53:24 compute-0 nova_compute[349548]: 2025-12-05 01:53:24.317 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.493 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 01:53:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.515 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.516 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.517 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.517 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.555 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.556 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.557 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.557 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  5 01:53:25 compute-0 nova_compute[349548]: 2025-12-05 01:53:25.558 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 01:53:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:53:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609764717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.077 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.208 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.210 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.211 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.217 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.217 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.218 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011047613669662043 of space, bias 1.0, pg target 0.3314284100898613 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:53:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.797 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3745MB free_disk=59.9220085144043GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.799 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.799 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.874 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.898 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.917 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.918 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.936 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  5 01:53:26 compute-0 nova_compute[349548]: 2025-12-05 01:53:26.965 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.037 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.352 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.352 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.355 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  5 01:53:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:27.357 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 01:53:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:53:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1390564852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.586 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.595 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.614 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.616 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 01:53:27 compute-0 nova_compute[349548]: 2025-12-05 01:53:27.617 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 01:53:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:28 compute-0 podman[420162]: 2025-12-05 01:53:28.709249152 +0000 UTC m=+0.110828255 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:53:28 compute-0 podman[420161]: 2025-12-05 01:53:28.729862539 +0000 UTC m=+0.131138883 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:53:29 compute-0 nova_compute[349548]: 2025-12-05 01:53:29.166 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:29 compute-0 nova_compute[349548]: 2025-12-05 01:53:29.186 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:53:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:29 compute-0 podman[158197]: time="2025-12-05T01:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:53:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:53:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8620 "" "Go-http-client/1.1"
Dec  5 01:53:30 compute-0 podman[420202]: 2025-12-05 01:53:30.726431275 +0000 UTC m=+0.124748935 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:53:30 compute-0 podman[420201]: 2025-12-05 01:53:30.738454562 +0000 UTC m=+0.144475068 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:53:31 compute-0 openstack_network_exporter[366555]: ERROR   01:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:31 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.998 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:31.999 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.016 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.099 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.100 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.112 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.112 349552 INFO nova.compute.claims [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.235 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:53:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/140473233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.779 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.791 349552 DEBUG nova.compute.provider_tree [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.815 349552 DEBUG nova.scheduler.client.report [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.849 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.850 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.918 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.918 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.940 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 01:53:32 compute-0 nova_compute[349548]: 2025-12-05 01:53:32.976 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.067 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.070 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.071 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating image(s)#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.131 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.207 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.269 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.282 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.342 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.344 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.345 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.345 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.387 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.397 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 139 MiB data, 267 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:53:33 compute-0 nova_compute[349548]: 2025-12-05 01:53:33.861 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.010 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.232 349552 DEBUG nova.objects.instance [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.300 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.352 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.363 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.430 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.431 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.432 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.432 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.476 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:34 compute-0 nova_compute[349548]: 2025-12-05 01:53:34.488 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:34 compute-0 podman[420497]: 2025-12-05 01:53:34.688154154 +0000 UTC m=+0.110215727 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, version=9.4)
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.017 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.272 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.273 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Ensure instance console log exists: /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.275 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.276 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:35 compute-0 nova_compute[349548]: 2025-12-05 01:53:35.277 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 143 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 270 KiB/s wr, 22 op/s
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.578 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Successfully updated port: 4341bf52-6bd5-42ee-b25d-f3d9844af854 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.601 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.602 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.603 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 01:53:36 compute-0 nova_compute[349548]: 2025-12-05 01:53:36.756 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 01:53:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec  5 01:53:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.796 349552 DEBUG nova.compute.manager [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.797 349552 DEBUG nova.compute.manager [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing instance network info cache due to event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:53:37 compute-0 nova_compute[349548]: 2025-12-05 01:53:37.797 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.942 349552 DEBUG nova.network.neutron [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.966 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.967 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance network_info: |[{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.969 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.970 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.975 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start _get_guest_xml network_info=[{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 01:53:38 compute-0 nova_compute[349548]: 2025-12-05 01:53:38.993 349552 WARNING nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.009 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.010 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.016 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.017 349552 DEBUG nova.virt.libvirt.host [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.018 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.018 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.019 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.019 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.020 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.021 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.021 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.022 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.022 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.023 349552 DEBUG nova.virt.hardware [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.026 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:53:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067829114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:53:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 36 op/s
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.534 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:39 compute-0 nova_compute[349548]: 2025-12-05 01:53:39.537 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.746049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619746088, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2039, "num_deletes": 251, "total_data_size": 3364282, "memory_usage": 3421440, "flush_reason": "Manual Compaction"}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619768089, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3309185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25532, "largest_seqno": 27570, "table_properties": {"data_size": 3299989, "index_size": 5818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18439, "raw_average_key_size": 20, "raw_value_size": 3281649, "raw_average_value_size": 3563, "num_data_blocks": 259, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899392, "oldest_key_time": 1764899392, "file_creation_time": 1764899619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 22103 microseconds, and 6845 cpu microseconds.
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.768150) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3309185 bytes OK
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.768171) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770271) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770286) EVENT_LOG_v1 {"time_micros": 1764899619770281, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.770303) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3355782, prev total WAL file size 3355782, number of live WAL files 2.
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.771512) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3231KB)], [59(7186KB)]
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619771589, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10668050, "oldest_snapshot_seqno": -1}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5021 keys, 8913574 bytes, temperature: kUnknown
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619814358, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8913574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8878460, "index_size": 21436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 124527, "raw_average_key_size": 24, "raw_value_size": 8786097, "raw_average_value_size": 1749, "num_data_blocks": 889, "num_entries": 5021, "num_filter_entries": 5021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899619, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.814531) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8913574 bytes
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.816421) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 249.1 rd, 208.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.0 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5535, records dropped: 514 output_compression: NoCompression
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.816435) EVENT_LOG_v1 {"time_micros": 1764899619816428, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42820, "compaction_time_cpu_micros": 18238, "output_level": 6, "num_output_files": 1, "total_output_size": 8913574, "num_input_records": 5535, "num_output_records": 5021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619817005, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899619818283, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.771359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818477) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:39 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:53:39.818481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:53:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:53:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/393097682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.024 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.074 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.085 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:53:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126179487' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.625 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.627 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:53:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  5 01:53:40 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.628 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.629 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.631 349552 DEBUG nova.objects.instance [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.650 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] End _get_guest_xml xml=<domain type="kvm">
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <uuid>7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</uuid>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <name>instance-00000003</name>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <memory>524288</memory>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <metadata>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:name>vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7</nova:name>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 01:53:38</nova:creationTime>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:flavor name="m1.small">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:memory>512</nova:memory>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:ephemeral>1</nova:ephemeral>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <nova:port uuid="4341bf52-6bd5-42ee-b25d-f3d9844af854">
Dec  5 01:53:40 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="192.168.0.25" ipVersion="4"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </metadata>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <system>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="serial">7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="uuid">7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </system>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <os>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </os>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <features>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <apic/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </features>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </clock>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </source>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.eph0">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </source>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <target dev="vdb" bus="virtio"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </source>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:53:40 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:68:a7:22"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <target dev="tap4341bf52-6b"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/console.log" append="off"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </serial>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <video>
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </video>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 01:53:40 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 01:53:40 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 01:53:40 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:53:40 compute-0 nova_compute[349548]: </domain>
Dec  5 01:53:40 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Preparing to wait for external event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.664 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.665 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.665 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:53:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.666 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.666 349552 DEBUG nova.network.os_vif_util [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.667 349552 DEBUG os_vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.670 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.670 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.675 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.675 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4341bf52-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.676 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4341bf52-6b, col_values=(('external_ids', {'iface-id': '4341bf52-6bd5-42ee-b25d-f3d9844af854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:a7:22', 'vm-uuid': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:40 compute-0 NetworkManager[49092]: <info>  [1764899620.6801] manager: (tap4341bf52-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.696 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.698 349552 INFO os_vif [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b')#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.819 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.821 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.822 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.823 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:68:a7:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.824 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Using config drive#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.866 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:40 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:53:40.627 349552 DEBUG nova.virt.libvirt.vif [None req-91ef5c10-b4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.967 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated VIF entry in instance network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.968 349552 DEBUG nova.network.neutron [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:53:40 compute-0 nova_compute[349548]: 2025-12-05 01:53:40.986 349552 DEBUG oslo_concurrency.lockutils [req-8d96d2f9-e265-4b2e-b9c6-ede5ebda4e49 req-0698238d-2415-48c0-b38b-a9c3a382cb19 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.205 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Creating config drive at /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.219 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tf4ki8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.353 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp73tf4ki8" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.416 349552 DEBUG nova.storage.rbd_utils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.429 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.497 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:53:41 compute-0 podman[420717]: 2025-12-05 01:53:41.688628767 +0000 UTC m=+0.102657116 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:53:41 compute-0 podman[420716]: 2025-12-05 01:53:41.691197939 +0000 UTC m=+0.106057981 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:53:41 compute-0 podman[420719]: 2025-12-05 01:53:41.704543783 +0000 UTC m=+0.114535738 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.742 349552 DEBUG oslo_concurrency.processutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.743 349552 INFO nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deleting local config drive /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.config because it was imported into RBD.#033[00m
Dec  5 01:53:41 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 01:53:41 compute-0 podman[420718]: 2025-12-05 01:53:41.787688121 +0000 UTC m=+0.188494379 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  5 01:53:41 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.8716] manager: (tap4341bf52-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  5 01:53:41 compute-0 kernel: tap4341bf52-6b: entered promiscuous mode
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00040|binding|INFO|Claiming lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 for this chassis.
Dec  5 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00041|binding|INFO|4341bf52-6bd5-42ee-b25d-f3d9844af854: Claiming fa:16:3e:68:a7:22 192.168.0.25
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.900 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a7:22 192.168.0.25'], port_security=['fa:16:3e:68:a7:22 192.168.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:cidrs': '192.168.0.25/24', 'neutron:device_id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=4341bf52-6bd5-42ee-b25d-f3d9844af854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.902 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 4341bf52-6bd5-42ee-b25d-f3d9844af854 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis#033[00m
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.905 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00042|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 ovn-installed in OVS
Dec  5 01:53:41 compute-0 ovn_controller[89286]: 2025-12-05T01:53:41Z|00043|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 up in Southbound
Dec  5 01:53:41 compute-0 nova_compute[349548]: 2025-12-05 01:53:41.915 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:41 compute-0 systemd-udevd[420841]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 01:53:41 compute-0 systemd-machined[138700]: New machine qemu-3-instance-00000003.
Dec  5 01:53:41 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  5 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.9364] device (tap4341bf52-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 01:53:41 compute-0 NetworkManager[49092]: <info>  [1764899621.9382] device (tap4341bf52-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.937 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb4b963-6e7d-4423-a8b8-e6f4a1124e6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.971 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[5fade635-7f87-48f0-8d5f-3bb51018c657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:41.975 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[89f28969-3256-4211-b014-c29c5765c4de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.013 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[db4f7160-1b61-4d4b-b365-42678c2f8f53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.039 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4e330214-fb49-4b9a-9d3a-6c60620500b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 8, 'rx_bytes': 574, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 15952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420853, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.061 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[58a717ca-5dd6-4361-a01d-de19dc7915d5]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420855, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420855, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.063 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.068 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.069 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.070 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:53:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:42.071 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.106 349552 DEBUG nova.compute.manager [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.107 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.108 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.108 349552 DEBUG oslo_concurrency.lockutils [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.109 349552 DEBUG nova.compute.manager [req-146e440c-5d9e-498e-bf31-5caa8724749d req-c7f8bff5-45c2-4490-b0b2-a8a47a875efe a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Processing event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.626 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.623513, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.627 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Started (Lifecycle Event)#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.630 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.636 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.642 349552 INFO nova.virt.libvirt.driver [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance spawned successfully.#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.643 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.647 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.652 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.672 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.673 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.673 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.674 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.674 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.675 349552 DEBUG nova.virt.libvirt.driver [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.679 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.679 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.625402, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.681 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Paused (Lifecycle Event)#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.710 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.717 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899622.6345963, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.718 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Resumed (Lifecycle Event)#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.741 349552 INFO nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 9.67 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.741 349552 DEBUG nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.798 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.808 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.824 349552 INFO nova.compute.manager [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 10.76 seconds to build instance.#033[00m
Dec  5 01:53:42 compute-0 nova_compute[349548]: 2025-12-05 01:53:42.852 349552 DEBUG oslo_concurrency.lockutils [None req-91ef5c10-b499-4397-9536-401f7b1ca092 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 172 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:53:43 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 01:53:43 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.225 349552 DEBUG nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.227 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.228 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.229 349552 DEBUG oslo_concurrency.lockutils [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.229 349552 DEBUG nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:53:44 compute-0 nova_compute[349548]: 2025-12-05 01:53:44.230 349552 WARNING nova.compute.manager [req-c57ac460-abde-4043-84fa-b09679a7277f req-9bc7b94a-1a84-4a6f-bf9d-7c0355aa3f8d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received unexpected event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with vm_state active and task_state None.#033[00m
Dec  5 01:53:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:53:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:53:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:53:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2656003973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:53:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.4 MiB/s wr, 54 op/s
Dec  5 01:53:45 compute-0 nova_compute[349548]: 2025-12-05 01:53:45.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:53:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:53:46 compute-0 nova_compute[349548]: 2025-12-05 01:53:46.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 73 op/s
Dec  5 01:53:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec  5 01:53:50 compute-0 nova_compute[349548]: 2025-12-05 01:53:50.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:51 compute-0 nova_compute[349548]: 2025-12-05 01:53:51.502 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 61 op/s
Dec  5 01:53:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 60 op/s
Dec  5 01:53:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 20 KiB/s wr, 59 op/s
Dec  5 01:53:55 compute-0 nova_compute[349548]: 2025-12-05 01:53:55.696 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.183 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.184 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:53:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:53:56.185 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:53:56 compute-0 nova_compute[349548]: 2025-12-05 01:53:56.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:53:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 43 op/s
Dec  5 01:53:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:53:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec  5 01:53:59 compute-0 podman[420936]: 2025-12-05 01:53:59.703468912 +0000 UTC m=+0.107348436 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  5 01:53:59 compute-0 podman[420937]: 2025-12-05 01:53:59.728116513 +0000 UTC m=+0.129582049 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:53:59 compute-0 podman[158197]: time="2025-12-05T01:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:53:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:53:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8621 "" "Go-http-client/1.1"
Dec  5 01:54:00 compute-0 nova_compute[349548]: 2025-12-05 01:54:00.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: ERROR   01:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:54:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:54:01 compute-0 nova_compute[349548]: 2025-12-05 01:54:01.508 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1 op/s
Dec  5 01:54:01 compute-0 podman[420976]: 2025-12-05 01:54:01.747482947 +0000 UTC m=+0.142673027 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 01:54:01 compute-0 podman[420975]: 2025-12-05 01:54:01.751486179 +0000 UTC m=+0.155122855 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Dec  5 01:54:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:05 compute-0 nova_compute[349548]: 2025-12-05 01:54:05.708 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:05 compute-0 podman[421011]: 2025-12-05 01:54:05.730525774 +0000 UTC m=+0.129653682 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, vcs-type=git, architecture=x86_64, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Dec  5 01:54:06 compute-0 nova_compute[349548]: 2025-12-05 01:54:06.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:10 compute-0 nova_compute[349548]: 2025-12-05 01:54:10.713 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:11 compute-0 nova_compute[349548]: 2025-12-05 01:54:11.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:11 compute-0 ovn_controller[89286]: 2025-12-05T01:54:11Z|00044|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Dec  5 01:54:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:12 compute-0 podman[421030]: 2025-12-05 01:54:12.713363763 +0000 UTC m=+0.108759447 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:54:12 compute-0 podman[421029]: 2025-12-05 01:54:12.743487766 +0000 UTC m=+0.144400005 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec  5 01:54:12 compute-0 podman[421032]: 2025-12-05 01:54:12.748541668 +0000 UTC m=+0.128789028 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git)
Dec  5 01:54:12 compute-0 podman[421031]: 2025-12-05 01:54:12.778650481 +0000 UTC m=+0.169548779 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Dec  5 01:54:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:15 compute-0 nova_compute[349548]: 2025-12-05 01:54:15.721 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:54:16
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.meta']
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:16 compute-0 nova_compute[349548]: 2025-12-05 01:54:16.516 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:54:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:54:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a7177625-f8cf-4768-860a-b948f5d1dcc3 does not exist
Dec  5 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e66943c4-f329-4521-9e57-dd53e87fdedb does not exist
Dec  5 01:54:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aad8d181-da1d-4616-8ae7-1301bccb0c0a does not exist
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:54:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:54:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:54:17 compute-0 ovn_controller[89286]: 2025-12-05T01:54:17Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:a7:22 192.168.0.25
Dec  5 01:54:18 compute-0 ovn_controller[89286]: 2025-12-05T01:54:18Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:a7:22 192.168.0.25
Dec  5 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.163265416 +0000 UTC m=+0.079537319 container create 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.118995356 +0000 UTC m=+0.035267299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:19 compute-0 systemd[1]: Started libpod-conmon-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope.
Dec  5 01:54:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.294256314 +0000 UTC m=+0.210528247 container init 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.313064751 +0000 UTC m=+0.229336664 container start 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 01:54:19 compute-0 podman[421377]: 2025-12-05 01:54:19.319278005 +0000 UTC m=+0.235549908 container attach 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:54:19 compute-0 hardcore_panini[421392]: 167 167
Dec  5 01:54:19 compute-0 systemd[1]: libpod-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope: Deactivated successfully.
Dec  5 01:54:19 compute-0 podman[421397]: 2025-12-05 01:54:19.423613847 +0000 UTC m=+0.066929925 container died 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 01:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a11b8f7943a43ab4cd9fe8a2af9d6651fd462f2142de5dbb246b802d91b64df-merged.mount: Deactivated successfully.
Dec  5 01:54:19 compute-0 podman[421397]: 2025-12-05 01:54:19.510812749 +0000 UTC m=+0.154128747 container remove 343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_panini, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:54:19 compute-0 systemd[1]: libpod-conmon-343ad646bd82bc88d0e0f8dbbd6f9af9cdb3a5eebb9b14556468438f945baab4.scope: Deactivated successfully.
Dec  5 01:54:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 172 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 7 op/s
Dec  5 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.816663025 +0000 UTC m=+0.090672621 container create cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.775399919 +0000 UTC m=+0.049409595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:19 compute-0 systemd[1]: Started libpod-conmon-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope.
Dec  5 01:54:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:19 compute-0 podman[421419]: 2025-12-05 01:54:19.995010929 +0000 UTC m=+0.269020535 container init cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:54:20 compute-0 podman[421419]: 2025-12-05 01:54:20.020264377 +0000 UTC m=+0.294274003 container start cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:54:20 compute-0 podman[421419]: 2025-12-05 01:54:20.033269901 +0000 UTC m=+0.307279507 container attach cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:54:20 compute-0 nova_compute[349548]: 2025-12-05 01:54:20.726 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:21 compute-0 sharp_leakey[421435]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:54:21 compute-0 sharp_leakey[421435]: --> relative data size: 1.0
Dec  5 01:54:21 compute-0 sharp_leakey[421435]: --> All data devices are unavailable
Dec  5 01:54:21 compute-0 systemd[1]: libpod-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Deactivated successfully.
Dec  5 01:54:21 compute-0 systemd[1]: libpod-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Consumed 1.200s CPU time.
Dec  5 01:54:21 compute-0 podman[421419]: 2025-12-05 01:54:21.302928218 +0000 UTC m=+1.576937824 container died cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:54:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-862c40cb65149a4bfac317ca78a69c94478568ab43c3e7b73c2b1a9955bbfbac-merged.mount: Deactivated successfully.
Dec  5 01:54:21 compute-0 podman[421419]: 2025-12-05 01:54:21.383069092 +0000 UTC m=+1.657078688 container remove cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leakey, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:54:21 compute-0 systemd[1]: libpod-conmon-cb2a71aff4155338a9978944ed7b86527726d7d8e0399fc92eb2efa7cedd3b8d.scope: Deactivated successfully.
Dec  5 01:54:21 compute-0 nova_compute[349548]: 2025-12-05 01:54:21.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 177 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 29 KiB/s wr, 26 op/s
Dec  5 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:22 compute-0 nova_compute[349548]: 2025-12-05 01:54:22.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.408518421 +0000 UTC m=+0.065773893 container create 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 01:54:22 compute-0 systemd[1]: Started libpod-conmon-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope.
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.384784146 +0000 UTC m=+0.042039638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.538392098 +0000 UTC m=+0.195647620 container init 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.554326994 +0000 UTC m=+0.211582456 container start 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.559817778 +0000 UTC m=+0.217073250 container attach 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 01:54:22 compute-0 sleepy_lewin[421633]: 167 167
Dec  5 01:54:22 compute-0 systemd[1]: libpod-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope: Deactivated successfully.
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.568983785 +0000 UTC m=+0.226239287 container died 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 01:54:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a2f21f1185ad79198825176b58b1ecbfb765d28d20ed06f614d90178c1818a3-merged.mount: Deactivated successfully.
Dec  5 01:54:22 compute-0 podman[421616]: 2025-12-05 01:54:22.624438128 +0000 UTC m=+0.281693610 container remove 0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 01:54:22 compute-0 systemd[1]: libpod-conmon-0dc5e906f4147870741680cd835a30cb1b0259f133b0e7792dab347ccf43c84c.scope: Deactivated successfully.
Dec  5 01:54:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.844224893 +0000 UTC m=+0.052727218 container create 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:54:22 compute-0 systemd[1]: Started libpod-conmon-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope.
Dec  5 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.825103988 +0000 UTC m=+0.033606313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:22 compute-0 podman[421656]: 2025-12-05 01:54:22.981302592 +0000 UTC m=+0.189804947 container init 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:54:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:54:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6573 writes, 26K keys, 6573 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6573 writes, 1296 syncs, 5.07 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 694 writes, 1830 keys, 694 commit groups, 1.0 writes per commit group, ingest: 1.60 MB, 0.00 MB/s#012Interval WAL: 694 writes, 301 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:22.999700087 +0000 UTC m=+0.208202412 container start 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.004719158 +0000 UTC m=+0.213221493 container attach 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:54:23 compute-0 nova_compute[349548]: 2025-12-05 01:54:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:23 compute-0 nova_compute[349548]: 2025-12-05 01:54:23.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]: {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    "0": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "devices": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "/dev/loop3"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            ],
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_name": "ceph_lv0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_size": "21470642176",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "name": "ceph_lv0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "tags": {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_name": "ceph",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.crush_device_class": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.encrypted": "0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_id": "0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.vdo": "0"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            },
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "vg_name": "ceph_vg0"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        }
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    ],
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    "1": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "devices": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "/dev/loop4"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            ],
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_name": "ceph_lv1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_size": "21470642176",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "name": "ceph_lv1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "tags": {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_name": "ceph",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.crush_device_class": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.encrypted": "0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_id": "1",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.vdo": "0"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            },
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "vg_name": "ceph_vg1"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        }
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    ],
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    "2": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "devices": [
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "/dev/loop5"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            ],
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_name": "ceph_lv2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_size": "21470642176",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "name": "ceph_lv2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "tags": {
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.cluster_name": "ceph",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.crush_device_class": "",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.encrypted": "0",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osd_id": "2",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:                "ceph.vdo": "0"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            },
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "type": "block",
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:            "vg_name": "ceph_vg2"
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:        }
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]:    ]
Dec  5 01:54:23 compute-0 nervous_heisenberg[421672]: }
Dec  5 01:54:23 compute-0 systemd[1]: libpod-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope: Deactivated successfully.
Dec  5 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.846706719 +0000 UTC m=+1.055209034 container died 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:54:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fe8625996359ea483b0a4b651fb09bbf170a33e1ab0d957024963d36cdb64e6-merged.mount: Deactivated successfully.
Dec  5 01:54:23 compute-0 podman[421656]: 2025-12-05 01:54:23.926478733 +0000 UTC m=+1.134981048 container remove 6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:54:23 compute-0 systemd[1]: libpod-conmon-6d5dd2eecfc637151add432fdffbfd488cffabd44b86408501fe630a60955726.scope: Deactivated successfully.
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.619 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.620 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:54:24 compute-0 nova_compute[349548]: 2025-12-05 01:54:24.620 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:54:24 compute-0 podman[421831]: 2025-12-05 01:54:24.946077966 +0000 UTC m=+0.077704707 container create c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:24.920446699 +0000 UTC m=+0.052073470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:25 compute-0 systemd[1]: Started libpod-conmon-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope.
Dec  5 01:54:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.083353311 +0000 UTC m=+0.214980072 container init c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.098760812 +0000 UTC m=+0.230387553 container start c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.103342731 +0000 UTC m=+0.234969482 container attach c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:54:25 compute-0 youthful_elbakyan[421847]: 167 167
Dec  5 01:54:25 compute-0 systemd[1]: libpod-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope: Deactivated successfully.
Dec  5 01:54:25 compute-0 conmon[421847]: conmon c4fd77345ef6dead86e3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope/container/memory.events
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.11581456 +0000 UTC m=+0.247441321 container died c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 01:54:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-388d760a135427145ff715cf7cbc370ad8d461857c0e6deb08c1c2bc1c8eafea-merged.mount: Deactivated successfully.
Dec  5 01:54:25 compute-0 podman[421831]: 2025-12-05 01:54:25.166104628 +0000 UTC m=+0.297731369 container remove c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:54:25 compute-0 systemd[1]: libpod-conmon-c4fd77345ef6dead86e3fe9c8ebc02e880ffb8e68d3d7abcc38f6287a6c6e95e.scope: Deactivated successfully.
Dec  5 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.439737482 +0000 UTC m=+0.096917015 container create b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.410656177 +0000 UTC m=+0.067835790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:54:25 compute-0 systemd[1]: Started libpod-conmon-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope.
Dec  5 01:54:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.58534252 +0000 UTC m=+0.242522133 container init b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.618379955 +0000 UTC m=+0.275559488 container start b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:54:25 compute-0 podman[421873]: 2025-12-05 01:54:25.623175049 +0000 UTC m=+0.280354662 container attach b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.786 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.815 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:54:25 compute-0 nova_compute[349548]: 2025-12-05 01:54:25.815 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:54:26 compute-0 nova_compute[349548]: 2025-12-05 01:54:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:26 compute-0 nova_compute[349548]: 2025-12-05 01:54:26.520 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016563471666015497 of space, bias 1.0, pg target 0.4969041499804649 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:54:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]: {
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_id": 0,
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "type": "bluestore"
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    },
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_id": 1,
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "type": "bluestore"
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    },
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_id": 2,
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:        "type": "bluestore"
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]:    }
Dec  5 01:54:26 compute-0 ecstatic_satoshi[421890]: }
Dec  5 01:54:26 compute-0 systemd[1]: libpod-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Deactivated successfully.
Dec  5 01:54:26 compute-0 systemd[1]: libpod-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Consumed 1.212s CPU time.
Dec  5 01:54:26 compute-0 podman[421923]: 2025-12-05 01:54:26.920981435 +0000 UTC m=+0.060952748 container died b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 01:54:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8abe4a66391dfc77099148803f536d88263d2e4fd428f50cb6b49541da8e4a79-merged.mount: Deactivated successfully.
Dec  5 01:54:27 compute-0 podman[421923]: 2025-12-05 01:54:27.007380855 +0000 UTC m=+0.147352158 container remove b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_satoshi, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:54:27 compute-0 systemd[1]: libpod-conmon-b59b87d3e467a1c2f4646cd7276a393bd21c4e82bd036c8bd30b475266d2707b.scope: Deactivated successfully.
Dec  5 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:27 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 77dc7518-5cf2-4ff4-8051-1567cf658588 does not exist
Dec  5 01:54:27 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8771fc56-4f54-4ab2-801c-9fb23e523ee8 does not exist
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.100 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.101 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:54:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Dec  5 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:54:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798772681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.583 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:54:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.738 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.745 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.745 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.746 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:27 compute-0 nova_compute[349548]: 2025-12-05 01:54:27.752 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:54:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:28 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.306 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.308 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3505MB free_disk=59.888919830322266GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.309 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.657 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.658 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.659 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 01:54:28 compute-0 nova_compute[349548]: 2025-12-05 01:54:28.738 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 01:54:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:54:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.2 total, 600.0 interval#012Cumulative writes: 8074 writes, 32K keys, 8074 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8074 writes, 1655 syncs, 4.88 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 887 writes, 3287 keys, 887 commit groups, 1.0 writes per commit group, ingest: 3.45 MB, 0.01 MB/s#012Interval WAL: 887 writes, 328 syncs, 2.70 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:54:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:54:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807330372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.259 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.272 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.295 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.329 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 01:54:29 compute-0 nova_compute[349548]: 2025-12-05 01:54:29.329 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 01:54:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec  5 01:54:29 compute-0 podman[158197]: time="2025-12-05T01:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:54:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:54:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8628 "" "Go-http-client/1.1"
Dec  5 01:54:30 compute-0 podman[422034]: 2025-12-05 01:54:30.699423583 +0000 UTC m=+0.101725180 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 01:54:30 compute-0 podman[422033]: 2025-12-05 01:54:30.721876532 +0000 UTC m=+0.129933340 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 01:54:30 compute-0 nova_compute[349548]: 2025-12-05 01:54:30.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:54:31 compute-0 openstack_network_exporter[366555]: ERROR   01:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:54:31 compute-0 nova_compute[349548]: 2025-12-05 01:54:31.522 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:54:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Dec  5 01:54:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:32 compute-0 podman[422076]: 2025-12-05 01:54:32.708743295 +0000 UTC m=+0.118378587 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 01:54:32 compute-0 podman[422077]: 2025-12-05 01:54:32.745816683 +0000 UTC m=+0.134563570 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:54:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 96 KiB/s rd, 1.5 MiB/s wr, 31 op/s
Dec  5 01:54:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  5 01:54:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 01:54:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 6650 writes, 27K keys, 6650 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6650 writes, 1298 syncs, 5.12 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 733 writes, 2726 keys, 733 commit groups, 1.0 writes per commit group, ingest: 3.29 MB, 0.01 MB/s#012Interval WAL: 733 writes, 277 syncs, 2.65 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 01:54:35 compute-0 nova_compute[349548]: 2025-12-05 01:54:35.741 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:54:36 compute-0 nova_compute[349548]: 2025-12-05 01:54:36.524 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:54:36 compute-0 podman[422113]: 2025-12-05 01:54:36.717640906 +0000 UTC m=+0.119430276 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec  5 01:54:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 0 op/s
Dec  5 01:54:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 01:54:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.317 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.318 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.328 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.330 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.838 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Fri, 05 Dec 2025 01:54:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-07a1c91e-44a9-4e80-ba31-883598b9668d x-openstack-request-id: req-07a1c91e-44a9-4e80-ba31-883598b9668d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.839 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5", "name": "vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:53:30Z", "updated": "2025-12-05T01:53:42Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.25", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:a7:22"}, {"version": 4, "addr": "192.168.122.236", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:68:a7:22"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:53:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.839 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 used request id req-07a1c91e-44a9-4e80-ba31-883598b9668d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.841 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.846 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.849 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:54:38.850553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:54:38.853059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.881 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.882 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.882 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.915 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.916 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.916 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.955 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.956 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.957 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:54:38.959282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:54:38.961776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>]
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.962 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.963 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:38.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:54:38.963447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.025 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.026 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.026 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.114 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.115 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.115 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.178 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.178 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.179 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.194 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.195 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:54:39.194300) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.194 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.200 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.201 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.203 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.204 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.206 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.207 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.210 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:54:39.203155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:54:39.205854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.211 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41705472 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:54:39.211916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.212 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.213 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.214 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.215 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.216 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.216 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:54:39.216039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.237 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.258 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.277 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.278 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.279 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 6939125600 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.280 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:54:39.279653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9113944897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.281 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.282 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:54:39.282998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.283 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.284 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.285 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:54:39.285834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.289 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 / tap4341bf52-6b inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.289 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.293 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:54:39.297960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:54:39.299367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.299 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.300 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.302 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:54:39.302284) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.303 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 1991 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:54:39.303585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 4698 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.304 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:54:39.305330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.305 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:54:39.306732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7>]
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.64453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.307 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:54:39.307616) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.15625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:54:39.309314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.309 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.310 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:54:39.310783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.312 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:54:39.312392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.313 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 33920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 40500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.314 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 277840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:54:39.314303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:54:39.316068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.316 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.317 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:54:39.317687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.318 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.319 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.320 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:54:39.321 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:54:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:40 compute-0 nova_compute[349548]: 2025-12-05 01:54:40.745 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:41 compute-0 nova_compute[349548]: 2025-12-05 01:54:41.526 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:43 compute-0 podman[422135]: 2025-12-05 01:54:43.687842241 +0000 UTC m=+0.096984768 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:54:43 compute-0 podman[422138]: 2025-12-05 01:54:43.698806998 +0000 UTC m=+0.094036315 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 01:54:43 compute-0 podman[422136]: 2025-12-05 01:54:43.715130185 +0000 UTC m=+0.117596985 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:54:43 compute-0 podman[422137]: 2025-12-05 01:54:43.748563521 +0000 UTC m=+0.144991031 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec  5 01:54:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:54:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:54:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:54:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1583200801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:54:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:45 compute-0 nova_compute[349548]: 2025-12-05 01:54:45.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:54:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:54:46 compute-0 nova_compute[349548]: 2025-12-05 01:54:46.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:54:50 compute-0 nova_compute[349548]: 2025-12-05 01:54:50.754 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:51 compute-0 nova_compute[349548]: 2025-12-05 01:54:51.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec  5 01:54:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  5 01:54:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  5 01:54:55 compute-0 nova_compute[349548]: 2025-12-05 01:54:55.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.184 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.185 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:54:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:54:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:54:56 compute-0 nova_compute[349548]: 2025-12-05 01:54:56.536 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:54:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  5 01:54:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:54:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  5 01:54:59 compute-0 podman[158197]: time="2025-12-05T01:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:54:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:54:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
Dec  5 01:55:00 compute-0 nova_compute[349548]: 2025-12-05 01:55:00.765 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: ERROR   01:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:55:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:55:01 compute-0 nova_compute[349548]: 2025-12-05 01:55:01.540 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.1 KiB/s wr, 0 op/s
Dec  5 01:55:01 compute-0 podman[422221]: 2025-12-05 01:55:01.674311172 +0000 UTC m=+0.088679845 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:55:01 compute-0 podman[422220]: 2025-12-05 01:55:01.698569821 +0000 UTC m=+0.114154978 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  5 01:55:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s wr, 0 op/s
Dec  5 01:55:03 compute-0 podman[422263]: 2025-12-05 01:55:03.726101574 +0000 UTC m=+0.125394463 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Dec  5 01:55:03 compute-0 podman[422262]: 2025-12-05 01:55:03.757310258 +0000 UTC m=+0.163837519 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Dec  5 01:55:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:05 compute-0 nova_compute[349548]: 2025-12-05 01:55:05.771 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:06 compute-0 nova_compute[349548]: 2025-12-05 01:55:06.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:07 compute-0 podman[422298]: 2025-12-05 01:55:07.729691406 +0000 UTC m=+0.133859299 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vcs-type=git, container_name=kepler, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The 
Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_id=edpm)
Dec  5 01:55:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:10 compute-0 nova_compute[349548]: 2025-12-05 01:55:10.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:11 compute-0 nova_compute[349548]: 2025-12-05 01:55:11.545 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:14 compute-0 podman[422318]: 2025-12-05 01:55:14.713617236 +0000 UTC m=+0.123316475 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  5 01:55:14 compute-0 podman[422319]: 2025-12-05 01:55:14.745864529 +0000 UTC m=+0.142994846 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:55:14 compute-0 podman[422326]: 2025-12-05 01:55:14.753733729 +0000 UTC m=+0.131691739 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Dec  5 01:55:14 compute-0 podman[422320]: 2025-12-05 01:55:14.803307966 +0000 UTC m=+0.175275158 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 01:55:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:15 compute-0 nova_compute[349548]: 2025-12-05 01:55:15.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:55:16
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images', '.rgw.root', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.data']
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:16 compute-0 nova_compute[349548]: 2025-12-05 01:55:16.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:55:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:55:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:20 compute-0 nova_compute[349548]: 2025-12-05 01:55:20.787 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 01:55:21 compute-0 nova_compute[349548]: 2025-12-05 01:55:21.553 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:22 compute-0 nova_compute[349548]: 2025-12-05 01:55:22.096 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:55:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 01:55:23 compute-0 nova_compute[349548]: 2025-12-05 01:55:23.089 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 01:55:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:24 compute-0 nova_compute[349548]: 2025-12-05 01:55:24.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:55:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.713 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.714 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.714 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.716 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:55:25 compute-0 nova_compute[349548]: 2025-12-05 01:55:25.791 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:26 compute-0 nova_compute[349548]: 2025-12-05 01:55:26.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016577461621736017 of space, bias 1.0, pg target 0.4973238486520805 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:55:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:55:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.730 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.750 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.752 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.753 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.756 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.757 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.794 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.797 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.798 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:55:27 compute-0 nova_compute[349548]: 2025-12-05 01:55:27.799 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1467279240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.283 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.397 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.403 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.403 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.404 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.410 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.410 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.411 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c04683a5-4dec-4407-a718-63cfc7259477 does not exist
Dec  5 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cdec43e4-b4ed-4759-bd9e-5cfb2cdb0e63 does not exist
Dec  5 01:55:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ec700d17-11d5-415a-b013-97c52eb733bd does not exist
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:55:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:55:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.938 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.939 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3490MB free_disk=59.88883590698242GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:28 compute-0 nova_compute[349548]: 2025-12-05 01:55:28.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.173 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.189 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.190 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.434 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.6272694 +0000 UTC m=+0.073073688 container create 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.601783146 +0000 UTC m=+0.047587444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:29 compute-0 systemd[1]: Started libpod-conmon-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope.
Dec  5 01:55:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:29 compute-0 podman[158197]: time="2025-12-05T01:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.756396756 +0000 UTC m=+0.202201154 container init 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.782304462 +0000 UTC m=+0.228108770 container start 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.787165958 +0000 UTC m=+0.232970296 container attach 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:55:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45191 "" "Go-http-client/1.1"
Dec  5 01:55:29 compute-0 funny_tesla[422734]: 167 167
Dec  5 01:55:29 compute-0 systemd[1]: libpod-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope: Deactivated successfully.
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.804127513 +0000 UTC m=+0.249931811 container died 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:55:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b98137664089a6d1d9c2124d93d50a22566e4a0dcc31dc818ebe3f26d918d1ea-merged.mount: Deactivated successfully.
Dec  5 01:55:29 compute-0 podman[422699]: 2025-12-05 01:55:29.8618808 +0000 UTC m=+0.307685108 container remove 4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:55:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8624 "" "Go-http-client/1.1"
Dec  5 01:55:29 compute-0 systemd[1]: libpod-conmon-4a5befb52041ebc18bf8a529545ff04653faefd187ea7f3e8310774b37d9f145.scope: Deactivated successfully.
Dec  5 01:55:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:55:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2100140819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.937 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.949 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:29.985 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:55:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:29.986 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:55:29 compute-0 nova_compute[349548]: 2025-12-05 01:55:29.999 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.000 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.001 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.153530128 +0000 UTC m=+0.089442266 container create 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.122101298 +0000 UTC m=+0.058013526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:30 compute-0 systemd[1]: Started libpod-conmon-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope.
Dec  5 01:55:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.281332627 +0000 UTC m=+0.217244785 container init 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.294693102 +0000 UTC m=+0.230605240 container start 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 01:55:30 compute-0 podman[422760]: 2025-12-05 01:55:30.298736355 +0000 UTC m=+0.234648493 container attach 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:55:30 compute-0 nova_compute[349548]: 2025-12-05 01:55:30.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: ERROR   01:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:55:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:55:31 compute-0 laughing_grothendieck[422777]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:55:31 compute-0 laughing_grothendieck[422777]: --> relative data size: 1.0
Dec  5 01:55:31 compute-0 laughing_grothendieck[422777]: --> All data devices are unavailable
Dec  5 01:55:31 compute-0 systemd[1]: libpod-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Deactivated successfully.
Dec  5 01:55:31 compute-0 podman[422760]: 2025-12-05 01:55:31.510312916 +0000 UTC m=+1.446225094 container died 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:55:31 compute-0 systemd[1]: libpod-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Consumed 1.157s CPU time.
Dec  5 01:55:31 compute-0 nova_compute[349548]: 2025-12-05 01:55:31.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2782ac93dcdfe004fd127bbe4118bbdb4f4ee530fb66b3c9613ec410a33d8ee0-merged.mount: Deactivated successfully.
Dec  5 01:55:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:31 compute-0 podman[422760]: 2025-12-05 01:55:31.622779456 +0000 UTC m=+1.558691604 container remove 79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:55:31 compute-0 systemd[1]: libpod-conmon-79ee89d2049129e948da155522a4b3d5734cfebf4e8bcf234b305b02e3d998c2.scope: Deactivated successfully.
Dec  5 01:55:31 compute-0 podman[422841]: 2025-12-05 01:55:31.927091598 +0000 UTC m=+0.107601464 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:55:31 compute-0 podman[422842]: 2025-12-05 01:55:31.940246087 +0000 UTC m=+0.121842164 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:55:32 compute-0 nova_compute[349548]: 2025-12-05 01:55:32.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.57417097 +0000 UTC m=+0.058657923 container create b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.548147692 +0000 UTC m=+0.032634655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:32 compute-0 systemd[1]: Started libpod-conmon-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope.
Dec  5 01:55:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.721051) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732721110, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1389, "num_deletes": 505, "total_data_size": 1659088, "memory_usage": 1697256, "flush_reason": "Manual Compaction"}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.723607364 +0000 UTC m=+0.208094337 container init b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732731093, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 985475, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27571, "largest_seqno": 28959, "table_properties": {"data_size": 980561, "index_size": 1798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15287, "raw_average_key_size": 19, "raw_value_size": 968081, "raw_average_value_size": 1217, "num_data_blocks": 82, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899620, "oldest_key_time": 1764899620, "file_creation_time": 1764899732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10121 microseconds, and 5010 cpu microseconds.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.731173) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 985475 bytes OK
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.731189) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734105) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734120) EVENT_LOG_v1 {"time_micros": 1764899732734116, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.734135) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1651836, prev total WAL file size 1651836, number of live WAL files 2.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.740990591 +0000 UTC m=+0.225477544 container start b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.735617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(962KB)], [62(8704KB)]
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732735671, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 9899049, "oldest_snapshot_seqno": -1}
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.75024693 +0000 UTC m=+0.234733933 container attach b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:55:32 compute-0 kind_murdock[423010]: 167 167
Dec  5 01:55:32 compute-0 systemd[1]: libpod-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope: Deactivated successfully.
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.755393275 +0000 UTC m=+0.239880248 container died b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 4843 keys, 7174974 bytes, temperature: kUnknown
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732806703, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 7174974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7143702, "index_size": 18042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122173, "raw_average_key_size": 25, "raw_value_size": 7057081, "raw_average_value_size": 1457, "num_data_blocks": 748, "num_entries": 4843, "num_filter_entries": 4843, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.807318) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 7174974 bytes
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.810183) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.5 rd, 100.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(17.3) write-amplify(7.3) OK, records in: 5816, records dropped: 973 output_compression: NoCompression
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.810210) EVENT_LOG_v1 {"time_micros": 1764899732810197, "job": 34, "event": "compaction_finished", "compaction_time_micros": 71486, "compaction_time_cpu_micros": 23416, "output_level": 6, "num_output_files": 1, "total_output_size": 7174974, "num_input_records": 5816, "num_output_records": 4843, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732812116, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec  5 01:55:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-42c866258acbf1c15df57c38396f4fe3cd81cdb63024ee95b238626912c24d59-merged.mount: Deactivated successfully.
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899732815360, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.735415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815877) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:55:32.815914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:55:32 compute-0 podman[422994]: 2025-12-05 01:55:32.838266856 +0000 UTC m=+0.322753809 container remove b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  5 01:55:32 compute-0 systemd[1]: libpod-conmon-b73bab1e0fa04d7a7be47741a13101901737fba95a390639509939c2905b8d4b.scope: Deactivated successfully.
Dec  5 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.108198845 +0000 UTC m=+0.080280549 container create 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.06481945 +0000 UTC m=+0.036901194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:33 compute-0 systemd[1]: Started libpod-conmon-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope.
Dec  5 01:55:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.261429367 +0000 UTC m=+0.233511101 container init 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.278624068 +0000 UTC m=+0.250705772 container start 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec  5 01:55:33 compute-0 podman[423033]: 2025-12-05 01:55:33.284914484 +0000 UTC m=+0.256996188 container attach 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:55:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:34 compute-0 practical_solomon[423048]: {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    "0": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "devices": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "/dev/loop3"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            ],
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_name": "ceph_lv0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_size": "21470642176",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "name": "ceph_lv0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "tags": {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_name": "ceph",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.crush_device_class": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.encrypted": "0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_id": "0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.vdo": "0"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            },
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "vg_name": "ceph_vg0"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        }
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    ],
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    "1": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "devices": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "/dev/loop4"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            ],
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_name": "ceph_lv1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_size": "21470642176",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "name": "ceph_lv1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "tags": {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_name": "ceph",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.crush_device_class": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.encrypted": "0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_id": "1",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.vdo": "0"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            },
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "vg_name": "ceph_vg1"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        }
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    ],
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    "2": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "devices": [
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "/dev/loop5"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            ],
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_name": "ceph_lv2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_size": "21470642176",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "name": "ceph_lv2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "tags": {
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.cluster_name": "ceph",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.crush_device_class": "",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.encrypted": "0",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osd_id": "2",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:                "ceph.vdo": "0"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            },
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "type": "block",
Dec  5 01:55:34 compute-0 practical_solomon[423048]:            "vg_name": "ceph_vg2"
Dec  5 01:55:34 compute-0 practical_solomon[423048]:        }
Dec  5 01:55:34 compute-0 practical_solomon[423048]:    ]
Dec  5 01:55:34 compute-0 practical_solomon[423048]: }
Dec  5 01:55:34 compute-0 systemd[1]: libpod-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope: Deactivated successfully.
Dec  5 01:55:34 compute-0 conmon[423048]: conmon 3030b76e5f1507734042 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope/container/memory.events
Dec  5 01:55:34 compute-0 podman[423033]: 2025-12-05 01:55:34.204763405 +0000 UTC m=+1.176845109 container died 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Dec  5 01:55:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1292ff254ae9b24cc5b6f2b322fe228e3acf30d4aaea87d25cc691752038fe6a-merged.mount: Deactivated successfully.
Dec  5 01:55:34 compute-0 podman[423033]: 2025-12-05 01:55:34.2902592 +0000 UTC m=+1.262340904 container remove 3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:55:34 compute-0 systemd[1]: libpod-conmon-3030b76e5f1507734042ea8d00b2966dc05b0827b82b49e5c8afd437f15fa797.scope: Deactivated successfully.
Dec  5 01:55:34 compute-0 podman[423066]: 2025-12-05 01:55:34.389178 +0000 UTC m=+0.138149660 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:55:34 compute-0 podman[423057]: 2025-12-05 01:55:34.393378568 +0000 UTC m=+0.137097391 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.317797217 +0000 UTC m=+0.074353083 container create a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.283441425 +0000 UTC m=+0.039997371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:35 compute-0 systemd[1]: Started libpod-conmon-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope.
Dec  5 01:55:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.483144658 +0000 UTC m=+0.239700544 container init a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.496951384 +0000 UTC m=+0.253507260 container start a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.502466059 +0000 UTC m=+0.259021955 container attach a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 01:55:35 compute-0 loving_chebyshev[423261]: 167 167
Dec  5 01:55:35 compute-0 systemd[1]: libpod-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope: Deactivated successfully.
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.506308676 +0000 UTC m=+0.262864542 container died a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-301a7b3eb0f995923555a74a7215040f3bf0646079a7b3cadc6d13af1160d8ee-merged.mount: Deactivated successfully.
Dec  5 01:55:35 compute-0 podman[423245]: 2025-12-05 01:55:35.556739639 +0000 UTC m=+0.313295495 container remove a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  5 01:55:35 compute-0 systemd[1]: libpod-conmon-a318ec1dc6302e43931e5ee1fd3d71a5b087328e9cb4459ace1f834079d9c8e5.scope: Deactivated successfully.
Dec  5 01:55:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.795170916 +0000 UTC m=+0.057689306 container create 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 01:55:35 compute-0 nova_compute[349548]: 2025-12-05 01:55:35.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.766497623 +0000 UTC m=+0.029016023 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:55:35 compute-0 systemd[1]: Started libpod-conmon-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope.
Dec  5 01:55:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.95096941 +0000 UTC m=+0.213487840 container init 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.966719701 +0000 UTC m=+0.229238081 container start 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:55:35 compute-0 podman[423283]: 2025-12-05 01:55:35.971464083 +0000 UTC m=+0.233982493 container attach 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.164 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.166 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.190 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.301 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.303 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.313 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.314 349552 INFO nova.compute.claims [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.495 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:36 compute-0 nova_compute[349548]: 2025-12-05 01:55:36.562 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:37 compute-0 charming_moore[423299]: {
Dec  5 01:55:37 compute-0 charming_moore[423299]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_id": 0,
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "type": "bluestore"
Dec  5 01:55:37 compute-0 charming_moore[423299]:    },
Dec  5 01:55:37 compute-0 charming_moore[423299]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_id": 1,
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "type": "bluestore"
Dec  5 01:55:37 compute-0 charming_moore[423299]:    },
Dec  5 01:55:37 compute-0 charming_moore[423299]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_id": 2,
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:55:37 compute-0 charming_moore[423299]:        "type": "bluestore"
Dec  5 01:55:37 compute-0 charming_moore[423299]:    }
Dec  5 01:55:37 compute-0 charming_moore[423299]: }
Dec  5 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1887424559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:55:37 compute-0 systemd[1]: libpod-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Deactivated successfully.
Dec  5 01:55:37 compute-0 podman[423283]: 2025-12-05 01:55:37.109430742 +0000 UTC m=+1.371949132 container died 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 01:55:37 compute-0 systemd[1]: libpod-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Consumed 1.117s CPU time.
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.125 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-62184cc27abc65ecb78da31d30479fe7c25746b0ee493570f5d2380924558c4a-merged.mount: Deactivated successfully.
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.156 349552 DEBUG nova.compute.provider_tree [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.182 349552 DEBUG nova.scheduler.client.report [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:55:37 compute-0 podman[423283]: 2025-12-05 01:55:37.19003287 +0000 UTC m=+1.452551250 container remove 8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.214 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.216 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 01:55:37 compute-0 systemd[1]: libpod-conmon-8f61047b2ae902a7479640a044c3ba40d7d73fa16282456434008dd77a10a769.scope: Deactivated successfully.
Dec  5 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:55:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3196e5ff-30a2-45fb-8aa2-738d35f1adf1 does not exist
Dec  5 01:55:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 600b7757-4334-43ab-97c9-87dd0f29cb56 does not exist
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.288 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.289 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.311 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.359 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.461 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.466 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.467 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating image(s)#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.512 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.572 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.623 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.634 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.723 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.724 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "af0f6d73e40706411141d751e7ebef271f1a5b42" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.725 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.726 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "af0f6d73e40706411141d751e7ebef271f1a5b42" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.759 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:37 compute-0 nova_compute[349548]: 2025-12-05 01:55:37.767 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.176 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.322 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.578 349552 DEBUG nova.objects.instance [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.645 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:38 compute-0 podman[423580]: 2025-12-05 01:55:38.715059189 +0000 UTC m=+0.123947702 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.expose-services=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.715 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.727 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.786 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.787 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.788 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.788 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.830 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:38 compute-0 nova_compute[349548]: 2025-12-05 01:55:38.840 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:38.989 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.291 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.496 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.497 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Ensure instance console log exists: /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.497 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.498 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.498 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 201 MiB data, 301 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.811 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Successfully updated port: 2799035c-b9e1-4c24-b031-9824b684480c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.833 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.928 349552 DEBUG nova.compute.manager [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-changed-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.929 349552 DEBUG nova.compute.manager [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing instance network info cache due to event network-changed-2799035c-b9e1-4c24-b031-9824b684480c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.929 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:55:39 compute-0 nova_compute[349548]: 2025-12-05 01:55:39.973 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 01:55:40 compute-0 nova_compute[349548]: 2025-12-05 01:55:40.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.172 349552 DEBUG nova.network.neutron [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.205 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.206 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance network_info: |[{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.206 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.207 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.213 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start _get_guest_xml network_info=[{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.229 349552 WARNING nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.252 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.254 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.263 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.263 349552 DEBUG nova.virt.libvirt.host [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.264 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.264 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T01:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='7d473820-6f66-40b4-b8d1-decd466d7dd2',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T01:46:34Z,direct_url=<?>,disk_format='qcow2',id=aa58c1e9-bdcc-4e60-9cee-eaeee0741251,min_disk=0,min_ram=0,name='cirros',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T01:46:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.265 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.266 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.267 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.268 349552 DEBUG nova.virt.hardware [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.271 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.565 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 210 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 561 KiB/s wr, 30 op/s
Dec  5 01:55:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:55:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1003878946' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.756 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:41 compute-0 nova_compute[349548]: 2025-12-05 01:55:41.757 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:41 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:55:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263597187' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.237 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.285 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.292 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 01:55:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715641047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.784 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.787 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:55:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  5 01:55:42 compute-0 nova_compute[349548]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=3611d2ae-da33-4e55-aec7-0bec88d3b4e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.788 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.790 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.793 349552 DEBUG nova.objects.instance [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.824 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] End _get_guest_xml xml=<domain type="kvm">
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <uuid>3611d2ae-da33-4e55-aec7-0bec88d3b4e0</uuid>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <name>instance-00000004</name>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <memory>524288</memory>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <metadata>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:name>vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq</nova:name>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 01:55:41</nova:creationTime>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:flavor name="m1.small">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:memory>512</nova:memory>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:ephemeral>1</nova:ephemeral>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="aa58c1e9-bdcc-4e60-9cee-eaeee0741251"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <nova:port uuid="2799035c-b9e1-4c24-b031-9824b684480c">
Dec  5 01:55:42 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="192.168.0.169" ipVersion="4"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </metadata>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <system>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="serial">3611d2ae-da33-4e55-aec7-0bec88d3b4e0</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="uuid">3611d2ae-da33-4e55-aec7-0bec88d3b4e0</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </system>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <os>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </os>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <features>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <apic/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </features>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </clock>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </cpu>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  <devices>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </source>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.eph0">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </source>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <target dev="vdb" bus="virtio"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </source>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 01:55:42 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      </auth>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </disk>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:10:64:51"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <target dev="tap2799035c-b9"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </interface>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/console.log" append="off"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </serial>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <video>
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </video>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </rng>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 01:55:42 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 01:55:42 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 01:55:42 compute-0 nova_compute[349548]:  </devices>
Dec  5 01:55:42 compute-0 nova_compute[349548]: </domain>
Dec  5 01:55:42 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.824 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Preparing to wait for external event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.825 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T01:55:37Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.826 349552 DEBUG nova.network.os_vif_util [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.827 349552 DEBUG os_vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.829 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.829 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.837 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.837 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2799035c-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.838 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2799035c-b9, col_values=(('external_ids', {'iface-id': '2799035c-b9e1-4c24-b031-9824b684480c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:64:51', 'vm-uuid': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.841 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:42 compute-0 NetworkManager[49092]: <info>  [1764899742.8435] manager: (tap2799035c-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.844 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.853 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.854 349552 INFO os_vif [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9')#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.914 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.915 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.915 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.916 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No VIF found with MAC fa:16:3e:10:64:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.916 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Using config drive#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.962 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:42 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:55:42.787 349552 DEBUG nova.virt.libvirt.vif [None req-32dbcdbb-11 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.985 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated VIF entry in instance network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:55:42 compute-0 nova_compute[349548]: 2025-12-05 01:55:42.987 349552 DEBUG nova.network.neutron [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.022 349552 DEBUG oslo_concurrency.lockutils [req-a06338e6-84f4-47fc-b2e8-d4c2087a5730 req-d0c3e7c6-692f-47fe-85a5-8cb57ff55f27 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.315 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Creating config drive at /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.328 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0gn_s2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.475 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb0gn_s2h" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.519 349552 DEBUG nova.storage.rbd_utils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.527 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:55:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 234 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.4 MiB/s wr, 37 op/s
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.782 349552 DEBUG oslo_concurrency.processutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config 3611d2ae-da33-4e55-aec7-0bec88d3b4e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.783 349552 INFO nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deleting local config drive /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.config because it was imported into RBD.#033[00m
Dec  5 01:55:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 01:55:43 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 01:55:43 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 01:55:43 compute-0 kernel: tap2799035c-b9: entered promiscuous mode
Dec  5 01:55:43 compute-0 NetworkManager[49092]: <info>  [1764899743.9312] manager: (tap2799035c-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.936 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00045|binding|INFO|Claiming lport 2799035c-b9e1-4c24-b031-9824b684480c for this chassis.
Dec  5 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00046|binding|INFO|2799035c-b9e1-4c24-b031-9824b684480c: Claiming fa:16:3e:10:64:51 192.168.0.169
Dec  5 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.947 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:64:51 192.168.0.169'], port_security=['fa:16:3e:10:64:51 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2799035c-b9e1-4c24-b031-9824b684480c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.948 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2799035c-b9e1-4c24-b031-9824b684480c in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 bound to our chassis#033[00m
Dec  5 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.949 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 01:55:43 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:43.966 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e6c4521d-3edd-476d-9615-6e046ecc924e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00047|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c ovn-installed in OVS
Dec  5 01:55:43 compute-0 ovn_controller[89286]: 2025-12-05T01:55:43Z|00048|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c up in Southbound
Dec  5 01:55:43 compute-0 nova_compute[349548]: 2025-12-05 01:55:43.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:43 compute-0 systemd-machined[138700]: New machine qemu-4-instance-00000004.
Dec  5 01:55:43 compute-0 systemd-udevd[423912]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.001 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[c17b6095-3e1b-4b05-87e1-f8694653e056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.005 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4a56c6-2bd2-4ddf-aa33-3a47ca72f5f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:44 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  5 01:55:44 compute-0 NetworkManager[49092]: <info>  [1764899744.0145] device (tap2799035c-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 01:55:44 compute-0 NetworkManager[49092]: <info>  [1764899744.0154] device (tap2799035c-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.051 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ab91b78e-12ae-43b7-a08c-ffbac88847a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.079 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4eaf7c27-f557-4fab-ae51-94d1ad3d9f5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 15952, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423917, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.104 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[df49c28f-ad28-451c-9cfc-79b6ce7e61ab]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423923, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423923, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.106 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.108 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:55:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:44.110 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.871 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899744.8700614, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.872 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Started (Lifecycle Event)#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.895 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.904 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899744.8702343, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.904 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Paused (Lifecycle Event)#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.923 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.929 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:55:44 compute-0 nova_compute[349548]: 2025-12-05 01:55:44.954 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:55:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:55:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:55:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:55:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2230527893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:55:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.4 MiB/s wr, 42 op/s
Dec  5 01:55:45 compute-0 podman[423985]: 2025-12-05 01:55:45.711011155 +0000 UTC m=+0.112222664 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:55:45 compute-0 podman[423988]: 2025-12-05 01:55:45.730961863 +0000 UTC m=+0.117163972 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  5 01:55:45 compute-0 podman[423986]: 2025-12-05 01:55:45.745372697 +0000 UTC m=+0.144222800 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:55:45 compute-0 podman[423987]: 2025-12-05 01:55:45.768716691 +0000 UTC m=+0.166147314 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.973 349552 DEBUG nova.compute.manager [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.973 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG oslo_concurrency.lockutils [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.974 349552 DEBUG nova.compute.manager [req-f03de3e7-7fbf-4a8a-875f-e993c3c56995 req-7ed506b9-b874-4836-b393-279c856938d2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Processing event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.975 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.982 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764899745.98215, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.983 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Resumed (Lifecycle Event)#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.986 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.994 349552 INFO nova.virt.libvirt.driver [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance spawned successfully.#033[00m
Dec  5 01:55:45 compute-0 nova_compute[349548]: 2025-12-05 01:55:45.996 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.008 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.019 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.030 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.030 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.031 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.032 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.032 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.033 349552 DEBUG nova.virt.libvirt.driver [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.041 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.086 349552 INFO nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 8.62 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.086 349552 DEBUG nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.143 349552 INFO nova.compute.manager [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 9.87 seconds to build instance.#033[00m
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.160 349552 DEBUG oslo_concurrency.lockutils [None req-32dbcdbb-1118-4818-b0f6-291598c749e3 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:55:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:55:46 compute-0 nova_compute[349548]: 2025-12-05 01:55:46.568 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 01:55:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 01:55:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Dec  5 01:55:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:47 compute-0 nova_compute[349548]: 2025-12-05 01:55:47.841 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.058 349552 DEBUG nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.058 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG oslo_concurrency.lockutils [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.059 349552 DEBUG nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:55:48 compute-0 nova_compute[349548]: 2025-12-05 01:55:48.060 349552 WARNING nova.compute.manager [req-9c4da4f4-de89-41b2-bd72-030f6b13beb4 req-9662a099-80c6-48fe-99ed-291181789645 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received unexpected event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with vm_state active and task_state None.#033[00m
Dec  5 01:55:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 MiB/s wr, 49 op/s
Dec  5 01:55:51 compute-0 nova_compute[349548]: 2025-12-05 01:55:51.572 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 631 KiB/s rd, 1.4 MiB/s wr, 68 op/s
Dec  5 01:55:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:52 compute-0 nova_compute[349548]: 2025-12-05 01:55:52.845 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 850 KiB/s wr, 67 op/s
Dec  5 01:55:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 22 KiB/s wr, 60 op/s
Dec  5 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.186 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:55:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:55:56.187 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:55:56 compute-0 nova_compute[349548]: 2025-12-05 01:55:56.577 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 55 op/s
Dec  5 01:55:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:55:57 compute-0 nova_compute[349548]: 2025-12-05 01:55:57.850 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:55:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec  5 01:55:59 compute-0 podman[158197]: time="2025-12-05T01:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:55:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:55:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: ERROR   01:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:56:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:56:01 compute-0 nova_compute[349548]: 2025-12-05 01:56:01.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 48 op/s
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.077 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.116 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.116 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.118 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.120 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.122 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.122 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.227 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.229 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.238 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.241 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:02 compute-0 podman[424101]: 2025-12-05 01:56:02.703051166 +0000 UTC m=+0.116586656 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:56:02 compute-0 podman[424100]: 2025-12-05 01:56:02.710024711 +0000 UTC m=+0.117864991 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Dec  5 01:56:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:02 compute-0 nova_compute[349548]: 2025-12-05 01:56:02.852 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 914 KiB/s rd, 29 op/s
Dec  5 01:56:04 compute-0 podman[424142]: 2025-12-05 01:56:04.705778564 +0000 UTC m=+0.111081902 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 01:56:04 compute-0 podman[424141]: 2025-12-05 01:56:04.729993602 +0000 UTC m=+0.134364874 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  5 01:56:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 234 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 11 op/s
Dec  5 01:56:06 compute-0 nova_compute[349548]: 2025-12-05 01:56:06.582 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec  5 01:56:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:07 compute-0 nova_compute[349548]: 2025-12-05 01:56:07.857 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 49 op/s
Dec  5 01:56:09 compute-0 podman[424178]: 2025-12-05 01:56:09.736636796 +0000 UTC m=+0.131308229 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, release-0.7.12=)
Dec  5 01:56:11 compute-0 nova_compute[349548]: 2025-12-05 01:56:11.586 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:56:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:12 compute-0 nova_compute[349548]: 2025-12-05 01:56:12.861 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 01:56:14 compute-0 ovn_controller[89286]: 2025-12-05T01:56:14Z|00049|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec  5 01:56:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1406: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 341 B/s wr, 59 op/s
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:56:16
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'backups', 'vms']
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:16 compute-0 nova_compute[349548]: 2025-12-05 01:56:16.589 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:16 compute-0 podman[424199]: 2025-12-05 01:56:16.70075982 +0000 UTC m=+0.106957136 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:56:16 compute-0 podman[424200]: 2025-12-05 01:56:16.713531428 +0000 UTC m=+0.113463099 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:56:16 compute-0 podman[424212]: 2025-12-05 01:56:16.733103326 +0000 UTC m=+0.093278374 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  5 01:56:16 compute-0 podman[424206]: 2025-12-05 01:56:16.74504528 +0000 UTC m=+0.135930788 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:56:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:56:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 341 B/s wr, 47 op/s
Dec  5 01:56:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:17 compute-0 nova_compute[349548]: 2025-12-05 01:56:17.865 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1408: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec  5 01:56:21 compute-0 nova_compute[349548]: 2025-12-05 01:56:21.592 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:21 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec  5 01:56:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 234 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 341 B/s wr, 10 op/s
Dec  5 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:56:22 compute-0 ovn_controller[89286]: 2025-12-05T01:56:22Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:10:64:51 192.168.0.169
Dec  5 01:56:22 compute-0 ovn_controller[89286]: 2025-12-05T01:56:22Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:10:64:51 192.168.0.169
Dec  5 01:56:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:22 compute-0 nova_compute[349548]: 2025-12-05 01:56:22.869 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1410: 321 pgs: 321 active+clean; 243 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 934 KiB/s wr, 15 op/s
Dec  5 01:56:24 compute-0 nova_compute[349548]: 2025-12-05 01:56:24.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:25 compute-0 nova_compute[349548]: 2025-12-05 01:56:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 254 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.4 MiB/s wr, 35 op/s
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.328 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.329 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:56:26 compute-0 nova_compute[349548]: 2025-12-05 01:56:26.596 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021977584529856093 of space, bias 1.0, pg target 0.6593275358956828 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:56:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:56:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:56:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.873 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.923 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.943 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.944 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:56:27 compute-0 nova_compute[349548]: 2025-12-05 01:56:27.945 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:28 compute-0 nova_compute[349548]: 2025-12-05 01:56:28.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.096 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.098 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:56:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:56:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735773691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.573 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:56:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.746 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.747 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.748 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 podman[158197]: time="2025-12-05T01:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:56:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.761 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.768 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.769 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.770 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.779 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 nova_compute[349548]: 2025-12-05 01:56:29.780 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:56:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.304 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.307 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3305MB free_disk=59.855751037597656GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.309 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.405 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.406 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.407 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.408 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:56:30 compute-0 nova_compute[349548]: 2025-12-05 01:56:30.527 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:56:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:56:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431019213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.027 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.050 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.079 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.081 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: ERROR   01:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:56:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:56:31 compute-0 nova_compute[349548]: 2025-12-05 01:56:31.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1414: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:56:32 compute-0 nova_compute[349548]: 2025-12-05 01:56:32.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:56:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:32 compute-0 nova_compute[349548]: 2025-12-05 01:56:32.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 166 KiB/s rd, 1.5 MiB/s wr, 57 op/s
Dec  5 01:56:33 compute-0 podman[424329]: 2025-12-05 01:56:33.740632671 +0000 UTC m=+0.140742423 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 01:56:33 compute-0 podman[424330]: 2025-12-05 01:56:33.74884571 +0000 UTC m=+0.140248768 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 01:56:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 585 KiB/s wr, 42 op/s
Dec  5 01:56:35 compute-0 podman[424370]: 2025-12-05 01:56:35.67739049 +0000 UTC m=+0.086887264 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  5 01:56:35 compute-0 podman[424371]: 2025-12-05 01:56:35.710834836 +0000 UTC m=+0.108962192 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 01:56:36 compute-0 nova_compute[349548]: 2025-12-05 01:56:36.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 88 KiB/s wr, 22 op/s
Dec  5 01:56:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:37 compute-0 nova_compute[349548]: 2025-12-05 01:56:37.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01cf20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.325 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.328 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.330 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.332 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 01:56:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:38.333 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 01:56:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:56:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:56:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:56:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:56:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:56:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e4fe887e-5743-4df5-8506-014503a75178 does not exist
Dec  5 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 586d32f4-310e-4074-a718-e859376d22bc does not exist
Dec  5 01:56:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1ce057b9-988d-4d7c-a59b-c2e26647e00d does not exist
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.929 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Fri, 05 Dec 2025 01:56:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d3424ee8-b311-4519-9f51-5448cf4cd270 x-openstack-request-id: req-d3424ee8-b311-4519-9f51-5448cf4cd270 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.930 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3611d2ae-da33-4e55-aec7-0bec88d3b4e0", "name": "vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq", "status": "ACTIVE", "tenant_id": "6ad982b73954486390215862ee62239f", "user_id": "ff880837791d4f49a54672b8d0e705ff", "metadata": {"metering.server_group": "b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1"}, "hostId": "c00078154b620f81ef3acab090afa15b914aca6c57286253be564282", "image": {"id": "aa58c1e9-bdcc-4e60-9cee-eaeee0741251", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa58c1e9-bdcc-4e60-9cee-eaeee0741251"}]}, "flavor": {"id": "7d473820-6f66-40b4-b8d1-decd466d7dd2", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/7d473820-6f66-40b4-b8d1-decd466d7dd2"}]}, "created": "2025-12-05T01:55:34Z", "updated": "2025-12-05T01:55:46Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.169", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:64:51"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:10:64:51"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T01:55:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.930 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3611d2ae-da33-4e55-aec7-0bec88d3b4e0 used request id req-d3424ee8-b311-4519-9f51-5448cf4cd270 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.933 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.934 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.935 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.935 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:56:39.935319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:56:39.940786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.979 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.980 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:39.981 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.021 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.023 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.061 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.062 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:56:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:56:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.106 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.106 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.107 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:56:40.109550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.109 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.112 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>]
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.113 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T01:56:40.112683) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:56:40.113586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.259 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.260 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.261 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.344 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.344 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.345 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:56:40 compute-0 podman[424682]: 2025-12-05 01:56:40.400647517 +0000 UTC m=+0.108484299 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., version=9.4, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, name=ubi9, managed_by=edpm_ansible)
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.433 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.434 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.434 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.513 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.513 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.514 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.517 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.518 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.519 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.519 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.520 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:56:40.516997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.520 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.521 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.522 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.522 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.525 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.526 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.527 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.527 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:56:40.526656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.529 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.529 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.530 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.530 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.531 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.531 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.532 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.532 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:56:40.534398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.534 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.535 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.536 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.537 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:56:40.539365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.540 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.541 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.542 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.543 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.544 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:56:40.544763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.584 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.611 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.640 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.671 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.672 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.673 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:56:40.673296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9233370301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.676 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 7901573506 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.678 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.679 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.680 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.682 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.682 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:56:40.678665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:56:40.683544) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.688 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.692 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.695 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.699 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 / tap2799035c-b9 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.701 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:56:40.701084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.703 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.704 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.704 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:56:40.703331) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.705 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.706 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.707 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.707 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.708 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.709 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.710 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 7370 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:56:40.708654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:56:40.710673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.712 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 225 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:56:40.712650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.713 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.714 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq>]
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.715 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.716 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.62890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.717 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.718 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T01:56:40.714657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:56:40.715752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:56:40.717669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.719 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 63 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.720 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:56:40.719706) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:56:40.722339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.722 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:56:40.724530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.724 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 35830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 42350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 337280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.725 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 35680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.726 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:56:40.726718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:56:40.728416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.728 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.729 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:56:40.734 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.024742534 +0000 UTC m=+0.038995453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.43877657 +0000 UTC m=+0.453029479 container create 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:41 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:56:41 compute-0 systemd[1]: Started libpod-conmon-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope.
Dec  5 01:56:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:41 compute-0 nova_compute[349548]: 2025-12-05 01:56:41.605 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.753482113 +0000 UTC m=+0.767735012 container init 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.764666527 +0000 UTC m=+0.778919416 container start 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.768972157 +0000 UTC m=+0.783225086 container attach 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:56:41 compute-0 optimistic_keldysh[424830]: 167 167
Dec  5 01:56:41 compute-0 systemd[1]: libpod-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope: Deactivated successfully.
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.774113391 +0000 UTC m=+0.788366290 container died 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:56:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-57871eb64a3b10a7800d51cba02a93e09211b35c584f4d9adc276ee0ec14966f-merged.mount: Deactivated successfully.
Dec  5 01:56:41 compute-0 podman[424814]: 2025-12-05 01:56:41.978533796 +0000 UTC m=+0.992786685 container remove 0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:56:42 compute-0 systemd[1]: libpod-conmon-0200ba125748b0e0151dfa7dc16c129620a04439fafdfc0c74fd95a3ddfa0948.scope: Deactivated successfully.
Dec  5 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.218059904 +0000 UTC m=+0.046554565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.33647503 +0000 UTC m=+0.164969651 container create 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:56:42 compute-0 systemd[1]: Started libpod-conmon-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope.
Dec  5 01:56:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.51141761 +0000 UTC m=+0.339912261 container init 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.54427368 +0000 UTC m=+0.372768301 container start 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:56:42 compute-0 podman[424855]: 2025-12-05 01:56:42.550743641 +0000 UTC m=+0.379238302 container attach 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:56:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:42 compute-0 nova_compute[349548]: 2025-12-05 01:56:42.883 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1420: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s wr, 0 op/s
Dec  5 01:56:43 compute-0 awesome_galois[424870]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:56:43 compute-0 awesome_galois[424870]: --> relative data size: 1.0
Dec  5 01:56:43 compute-0 awesome_galois[424870]: --> All data devices are unavailable
Dec  5 01:56:43 compute-0 systemd[1]: libpod-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Deactivated successfully.
Dec  5 01:56:43 compute-0 systemd[1]: libpod-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Consumed 1.176s CPU time.
Dec  5 01:56:43 compute-0 podman[424855]: 2025-12-05 01:56:43.8118478 +0000 UTC m=+1.640342521 container died 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 01:56:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-41456f03d9399cf135598541630039adfac88d3672ab82f06cf63b73903db38a-merged.mount: Deactivated successfully.
Dec  5 01:56:43 compute-0 podman[424855]: 2025-12-05 01:56:43.890235355 +0000 UTC m=+1.718729956 container remove 3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_galois, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 01:56:43 compute-0 systemd[1]: libpod-conmon-3c50516b163fbf0246378ea730475f0f480ae46b34a2987409fae246d7372197.scope: Deactivated successfully.
Dec  5 01:56:44 compute-0 podman[425047]: 2025-12-05 01:56:44.784735225 +0000 UTC m=+0.048297404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:56:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:56:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:56:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2936409340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.446585511 +0000 UTC m=+0.710147700 container create d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:56:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:56:45 compute-0 systemd[1]: Started libpod-conmon-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope.
Dec  5 01:56:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.825020219 +0000 UTC m=+1.088582458 container init d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.838188518 +0000 UTC m=+1.101750667 container start d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 01:56:45 compute-0 zealous_kepler[425062]: 167 167
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.842788257 +0000 UTC m=+1.106350486 container attach d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:56:45 compute-0 systemd[1]: libpod-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope: Deactivated successfully.
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.845153433 +0000 UTC m=+1.108715582 container died d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:56:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c4ec6db6c38af05f41bb09a82824e46d237b84147e83ead80d191a3c0bd94df-merged.mount: Deactivated successfully.
Dec  5 01:56:45 compute-0 podman[425047]: 2025-12-05 01:56:45.901444429 +0000 UTC m=+1.165006578 container remove d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_kepler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:56:45 compute-0 systemd[1]: libpod-conmon-d9f2b31da11ecda6bee4127b5389d76fd25541542274a45bcdcf51bb1047cd03.scope: Deactivated successfully.
Dec  5 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.134720903 +0000 UTC m=+0.064768865 container create 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 01:56:46 compute-0 systemd[1]: Started libpod-conmon-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope.
Dec  5 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.110591557 +0000 UTC m=+0.040639509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:56:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.297541522 +0000 UTC m=+0.227589464 container init 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.311185885 +0000 UTC m=+0.241233827 container start 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 01:56:46 compute-0 podman[425087]: 2025-12-05 01:56:46.317359347 +0000 UTC m=+0.247407359 container attach 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 01:56:46 compute-0 nova_compute[349548]: 2025-12-05 01:56:46.607 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:47 compute-0 competent_galileo[425103]: {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    "0": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "devices": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "/dev/loop3"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            ],
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_name": "ceph_lv0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_size": "21470642176",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "name": "ceph_lv0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "tags": {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_name": "ceph",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.crush_device_class": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.encrypted": "0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_id": "0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.vdo": "0"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            },
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "vg_name": "ceph_vg0"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        }
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    ],
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    "1": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "devices": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "/dev/loop4"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            ],
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_name": "ceph_lv1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_size": "21470642176",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "name": "ceph_lv1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "tags": {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_name": "ceph",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.crush_device_class": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.encrypted": "0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_id": "1",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.vdo": "0"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            },
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "vg_name": "ceph_vg1"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        }
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    ],
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    "2": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "devices": [
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "/dev/loop5"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            ],
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_name": "ceph_lv2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_size": "21470642176",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "name": "ceph_lv2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "tags": {
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.cluster_name": "ceph",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.crush_device_class": "",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.encrypted": "0",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osd_id": "2",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:                "ceph.vdo": "0"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            },
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "type": "block",
Dec  5 01:56:47 compute-0 competent_galileo[425103]:            "vg_name": "ceph_vg2"
Dec  5 01:56:47 compute-0 competent_galileo[425103]:        }
Dec  5 01:56:47 compute-0 competent_galileo[425103]:    ]
Dec  5 01:56:47 compute-0 competent_galileo[425103]: }
Dec  5 01:56:47 compute-0 systemd[1]: libpod-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope: Deactivated successfully.
Dec  5 01:56:47 compute-0 conmon[425103]: conmon 06a4f352c74d481edf2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope/container/memory.events
Dec  5 01:56:47 compute-0 podman[425087]: 2025-12-05 01:56:47.222384533 +0000 UTC m=+1.152432506 container died 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:56:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:56:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4c8c141d6121c94e266e07eba23445fb94328ae759b541887c2db76da2565c4-merged.mount: Deactivated successfully.
Dec  5 01:56:47 compute-0 nova_compute[349548]: 2025-12-05 01:56:47.885 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:48 compute-0 podman[425087]: 2025-12-05 01:56:48.400788965 +0000 UTC m=+2.330836927 container remove 06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_galileo, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:56:48 compute-0 systemd[1]: libpod-conmon-06a4f352c74d481edf2ce3738eb4eb9d373fb5370a19522f04d6f3c09416be4a.scope: Deactivated successfully.
Dec  5 01:56:48 compute-0 podman[425113]: 2025-12-05 01:56:48.502690909 +0000 UTC m=+1.211586172 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 01:56:48 compute-0 podman[425122]: 2025-12-05 01:56:48.530035004 +0000 UTC m=+1.251832058 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter)
Dec  5 01:56:48 compute-0 podman[425120]: 2025-12-05 01:56:48.537123813 +0000 UTC m=+1.266907201 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:56:48 compute-0 podman[425121]: 2025-12-05 01:56:48.570733154 +0000 UTC m=+1.307249461 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.419257938 +0000 UTC m=+0.077983485 container create 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.389001361 +0000 UTC m=+0.047726978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:49 compute-0 systemd[1]: Started libpod-conmon-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope.
Dec  5 01:56:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.557576892 +0000 UTC m=+0.216302479 container init 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.567774287 +0000 UTC m=+0.226499814 container start 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.573653692 +0000 UTC m=+0.232379259 container attach 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 01:56:49 compute-0 hopeful_shaw[425365]: 167 167
Dec  5 01:56:49 compute-0 systemd[1]: libpod-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope: Deactivated successfully.
Dec  5 01:56:49 compute-0 conmon[425365]: conmon 96e88a2d8def95930a68 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope/container/memory.events
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.578082426 +0000 UTC m=+0.236807983 container died 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae67ae8a120ff661d5e4551e7ce1dc8adfa8bc747b22a72841b09899b2e1048-merged.mount: Deactivated successfully.
Dec  5 01:56:49 compute-0 podman[425348]: 2025-12-05 01:56:49.632011756 +0000 UTC m=+0.290737293 container remove 96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shaw, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 01:56:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:56:49 compute-0 systemd[1]: libpod-conmon-96e88a2d8def95930a68fdcc9c25cdcc492b6936ba4649b1a2bf6a7e14487909.scope: Deactivated successfully.
Dec  5 01:56:49 compute-0 podman[425387]: 2025-12-05 01:56:49.965480805 +0000 UTC m=+0.092845741 container create a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:49.940271509 +0000 UTC m=+0.067636455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:56:50 compute-0 systemd[1]: Started libpod-conmon-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope.
Dec  5 01:56:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.141794553 +0000 UTC m=+0.269159529 container init a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.163587873 +0000 UTC m=+0.290952839 container start a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:56:50 compute-0 podman[425387]: 2025-12-05 01:56:50.1706103 +0000 UTC m=+0.297975256 container attach a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:56:51 compute-0 wonderful_cray[425403]: {
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_id": 0,
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "type": "bluestore"
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    },
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_id": 1,
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "type": "bluestore"
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    },
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_id": 2,
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:        "type": "bluestore"
Dec  5 01:56:51 compute-0 wonderful_cray[425403]:    }
Dec  5 01:56:51 compute-0 wonderful_cray[425403]: }
Dec  5 01:56:51 compute-0 systemd[1]: libpod-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Deactivated successfully.
Dec  5 01:56:51 compute-0 podman[425387]: 2025-12-05 01:56:51.428734314 +0000 UTC m=+1.556099260 container died a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 01:56:51 compute-0 systemd[1]: libpod-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Consumed 1.257s CPU time.
Dec  5 01:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-c83ab54b8a74c99c14cd060713eed461e2112184157a925019c44cfb27ad27f7-merged.mount: Deactivated successfully.
Dec  5 01:56:51 compute-0 podman[425387]: 2025-12-05 01:56:51.547117469 +0000 UTC m=+1.674482435 container remove a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Dec  5 01:56:51 compute-0 systemd[1]: libpod-conmon-a0eb49d59c6d09a13eccea4945815ef76dddce7c271a21eb735ce093f56a6310.scope: Deactivated successfully.
Dec  5 01:56:51 compute-0 nova_compute[349548]: 2025-12-05 01:56:51.610 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:56:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:56:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:56:51 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:51 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7d13f7ae-d493-41ee-8ae5-22aaba576c2c does not exist
Dec  5 01:56:51 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d438c367-67fd-4a97-9c7c-88ed7b6b26ba does not exist
Dec  5 01:56:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:52 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:56:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:52 compute-0 nova_compute[349548]: 2025-12-05 01:56:52.891 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1425: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:56:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Dec  5 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.188 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.188 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:56:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:56:56.189 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:56:56 compute-0 nova_compute[349548]: 2025-12-05 01:56:56.614 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 01:56:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:56:57 compute-0 nova_compute[349548]: 2025-12-05 01:56:57.899 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:56:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 01:56:59 compute-0 podman[158197]: time="2025-12-05T01:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:56:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:56:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8619 "" "Go-http-client/1.1"
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: ERROR   01:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:57:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:57:01 compute-0 nova_compute[349548]: 2025-12-05 01:57:01.617 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 01:57:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:02 compute-0 nova_compute[349548]: 2025-12-05 01:57:02.904 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 01:57:04 compute-0 podman[425496]: 2025-12-05 01:57:04.689465579 +0000 UTC m=+0.090442674 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:57:04 compute-0 podman[425497]: 2025-12-05 01:57:04.690397335 +0000 UTC m=+0.083371906 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:57:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 01:57:06 compute-0 nova_compute[349548]: 2025-12-05 01:57:06.621 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:06 compute-0 podman[425536]: 2025-12-05 01:57:06.716491696 +0000 UTC m=+0.119894388 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:57:06 compute-0 podman[425537]: 2025-12-05 01:57:06.743582365 +0000 UTC m=+0.132468901 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  5 01:57:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s wr, 0 op/s
Dec  5 01:57:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:07 compute-0 nova_compute[349548]: 2025-12-05 01:57:07.908 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:10 compute-0 podman[425575]: 2025-12-05 01:57:10.715313185 +0000 UTC m=+0.122255595 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-container, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  5 01:57:11 compute-0 nova_compute[349548]: 2025-12-05 01:57:11.625 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:12 compute-0 nova_compute[349548]: 2025-12-05 01:57:12.911 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:57:16
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', 'default.rgw.log']
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:57:16 compute-0 nova_compute[349548]: 2025-12-05 01:57:16.628 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:57:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:57:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:17 compute-0 nova_compute[349548]: 2025-12-05 01:57:17.914 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:18 compute-0 podman[425597]: 2025-12-05 01:57:18.693431725 +0000 UTC m=+0.090405033 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:57:18 compute-0 podman[425598]: 2025-12-05 01:57:18.758783255 +0000 UTC m=+0.149927240 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9)
Dec  5 01:57:18 compute-0 podman[425596]: 2025-12-05 01:57:18.776633055 +0000 UTC m=+0.168990664 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 01:57:18 compute-0 podman[425599]: 2025-12-05 01:57:18.803017584 +0000 UTC m=+0.183478780 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  5 01:57:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:21 compute-0 nova_compute[349548]: 2025-12-05 01:57:21.631 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:22 compute-0 nova_compute[349548]: 2025-12-05 01:57:22.918 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:24 compute-0 nova_compute[349548]: 2025-12-05 01:57:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:24 compute-0 nova_compute[349548]: 2025-12-05 01:57:24.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:57:25 compute-0 nova_compute[349548]: 2025-12-05 01:57:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.363 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.364 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.365 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:57:26 compute-0 nova_compute[349548]: 2025-12-05 01:57:26.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:57:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:57:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:27 compute-0 nova_compute[349548]: 2025-12-05 01:57:27.923 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.109733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848109780, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1163, "num_deletes": 251, "total_data_size": 1789397, "memory_usage": 1810976, "flush_reason": "Manual Compaction"}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848129567, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1750966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28960, "largest_seqno": 30122, "table_properties": {"data_size": 1745322, "index_size": 3039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11827, "raw_average_key_size": 19, "raw_value_size": 1734127, "raw_average_value_size": 2895, "num_data_blocks": 136, "num_entries": 599, "num_filter_entries": 599, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899733, "oldest_key_time": 1764899733, "file_creation_time": 1764899848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 19932 microseconds, and 10784 cpu microseconds.
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.129663) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1750966 bytes OK
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.129690) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133652) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133718) EVENT_LOG_v1 {"time_micros": 1764899848133701, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.133759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1784053, prev total WAL file size 1784053, number of live WAL files 2.
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.136554) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1709KB)], [65(7006KB)]
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848136690, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 8925940, "oldest_snapshot_seqno": -1}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 4928 keys, 7205078 bytes, temperature: kUnknown
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848263879, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7205078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7173268, "index_size": 18388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 124573, "raw_average_key_size": 25, "raw_value_size": 7085077, "raw_average_value_size": 1437, "num_data_blocks": 757, "num_entries": 4928, "num_filter_entries": 4928, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764899848, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.264178) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7205078 bytes
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.266651) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.1 rd, 56.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 6.8 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.1) OK, records in: 5442, records dropped: 514 output_compression: NoCompression
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.266672) EVENT_LOG_v1 {"time_micros": 1764899848266662, "job": 36, "event": "compaction_finished", "compaction_time_micros": 127321, "compaction_time_cpu_micros": 27712, "output_level": 6, "num_output_files": 1, "total_output_size": 7205078, "num_input_records": 5442, "num_output_records": 4928, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848267303, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764899848268810, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.135754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-01:57:28.269021) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.779 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.809 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.809 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.810 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:28 compute-0 nova_compute[349548]: 2025-12-05 01:57:28.811 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.102 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:57:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:57:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3471495524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.614 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:57:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:29 compute-0 podman[158197]: time="2025-12-05T01:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:57:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.773 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.775 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8636 "" "Go-http-client/1.1"
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.786 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.787 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.789 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.799 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.799 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.800 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:29 compute-0 nova_compute[349548]: 2025-12-05 01:57:29.810 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.408 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3209MB free_disk=59.855655670166016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.409 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.410 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.532 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.533 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:57:30 compute-0 nova_compute[349548]: 2025-12-05 01:57:30.679 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:57:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:57:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2845196599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.187 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.203 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.220 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.222 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.222 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: ERROR   01:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:57:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:57:31 compute-0 nova_compute[349548]: 2025-12-05 01:57:31.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:32 compute-0 nova_compute[349548]: 2025-12-05 01:57:32.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:33 compute-0 nova_compute[349548]: 2025-12-05 01:57:33.222 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:33 compute-0 nova_compute[349548]: 2025-12-05 01:57:33.255 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:57:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:35 compute-0 podman[425723]: 2025-12-05 01:57:35.677365629 +0000 UTC m=+0.083037767 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:35 compute-0 podman[425724]: 2025-12-05 01:57:35.693240113 +0000 UTC m=+0.092383688 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 01:57:36 compute-0 nova_compute[349548]: 2025-12-05 01:57:36.640 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1447: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:37 compute-0 podman[425764]: 2025-12-05 01:57:37.716433524 +0000 UTC m=+0.115202737 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec  5 01:57:37 compute-0 podman[425765]: 2025-12-05 01:57:37.718870503 +0000 UTC m=+0.114759245 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 01:57:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:37 compute-0 nova_compute[349548]: 2025-12-05 01:57:37.934 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1448: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:41 compute-0 nova_compute[349548]: 2025-12-05 01:57:41.643 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:41 compute-0 podman[425803]: 2025-12-05 01:57:41.705819179 +0000 UTC m=+0.102444910 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the 
latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4)
Dec  5 01:57:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:42 compute-0 nova_compute[349548]: 2025-12-05 01:57:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1450: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:43 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 01:57:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:57:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:57:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:57:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3550525541' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:57:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:57:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:57:46 compute-0 nova_compute[349548]: 2025-12-05 01:57:46.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 01:57:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:47 compute-0 nova_compute[349548]: 2025-12-05 01:57:47.942 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:49 compute-0 podman[425826]: 2025-12-05 01:57:49.722636844 +0000 UTC m=+0.111510664 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:57:49 compute-0 podman[425825]: 2025-12-05 01:57:49.738733265 +0000 UTC m=+0.147597185 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 01:57:49 compute-0 podman[425833]: 2025-12-05 01:57:49.748220841 +0000 UTC m=+0.109523329 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec  5 01:57:49 compute-0 podman[425827]: 2025-12-05 01:57:49.78355607 +0000 UTC m=+0.159479487 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec  5 01:57:51 compute-0 nova_compute[349548]: 2025-12-05 01:57:51.649 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1454: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:52 compute-0 nova_compute[349548]: 2025-12-05 01:57:52.947 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 285f0195-8471-4e9a-964d-3e6be48094b6 does not exist
Dec  5 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e3cf2af1-8a62-4b85-adff-20657386a7a8 does not exist
Dec  5 01:57:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bfcc8e7c-021b-47f0-8331-32712b614db6 does not exist
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:57:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:57:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:57:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.106463415 +0000 UTC m=+0.067094760 container create 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.075790456 +0000 UTC m=+0.036421841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:57:54 compute-0 systemd[1]: Started libpod-conmon-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope.
Dec  5 01:57:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.264324986 +0000 UTC m=+0.224956361 container init 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.282943318 +0000 UTC m=+0.243574663 container start 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:57:54 compute-0 podman[426170]: 2025-12-05 01:57:54.288477463 +0000 UTC m=+0.249108848 container attach 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:57:54 compute-0 peaceful_jang[426186]: 167 167
Dec  5 01:57:54 compute-0 systemd[1]: libpod-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope: Deactivated successfully.
Dec  5 01:57:54 compute-0 podman[426191]: 2025-12-05 01:57:54.389715528 +0000 UTC m=+0.063594532 container died 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf9e22a6837e1cf8ff94cb67fe1e42cc0b7bd70b1fcf6fce5e3e55a0306a04d3-merged.mount: Deactivated successfully.
Dec  5 01:57:54 compute-0 podman[426191]: 2025-12-05 01:57:54.464033619 +0000 UTC m=+0.137912573 container remove 679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:54 compute-0 systemd[1]: libpod-conmon-679b2cc98a0de2e05222273a33a1eebe7b49173f8b849f6b1092832d4e9ea0dd.scope: Deactivated successfully.
Dec  5 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.794051712 +0000 UTC m=+0.107355798 container create bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.747370794 +0000 UTC m=+0.060674930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:57:54 compute-0 systemd[1]: Started libpod-conmon-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope.
Dec  5 01:57:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:54 compute-0 podman[426212]: 2025-12-05 01:57:54.991427679 +0000 UTC m=+0.304731755 container init bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 01:57:55 compute-0 podman[426212]: 2025-12-05 01:57:55.017260153 +0000 UTC m=+0.330564209 container start bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:57:55 compute-0 podman[426212]: 2025-12-05 01:57:55.023429375 +0000 UTC m=+0.336733501 container attach bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1456: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.189 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:57:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:57:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:57:56 compute-0 unruffled_neumann[426229]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:57:56 compute-0 unruffled_neumann[426229]: --> relative data size: 1.0
Dec  5 01:57:56 compute-0 unruffled_neumann[426229]: --> All data devices are unavailable
Dec  5 01:57:56 compute-0 systemd[1]: libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Deactivated successfully.
Dec  5 01:57:56 compute-0 systemd[1]: libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Consumed 1.281s CPU time.
Dec  5 01:57:56 compute-0 conmon[426229]: conmon bba846903659aabfc736 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope/container/memory.events
Dec  5 01:57:56 compute-0 podman[426258]: 2025-12-05 01:57:56.450049588 +0000 UTC m=+0.048842299 container died bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-49bc63b93ed6759d56e6a8c8cf7c297f5208dd35e7ca266f9ef0ffbc515dd538-merged.mount: Deactivated successfully.
Dec  5 01:57:56 compute-0 podman[426258]: 2025-12-05 01:57:56.564563605 +0000 UTC m=+0.163356226 container remove bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Dec  5 01:57:56 compute-0 systemd[1]: libpod-conmon-bba846903659aabfc73628b19ab8333cb8479500d066927db8f69d0eae09e1af.scope: Deactivated successfully.
Dec  5 01:57:56 compute-0 nova_compute[349548]: 2025-12-05 01:57:56.653 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:57 compute-0 podman[426407]: 2025-12-05 01:57:57.677104733 +0000 UTC m=+0.054376194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:57:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:57:57 compute-0 nova_compute[349548]: 2025-12-05 01:57:57.952 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.125225993 +0000 UTC m=+0.502497354 container create 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 01:57:58 compute-0 systemd[1]: Started libpod-conmon-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope.
Dec  5 01:57:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.258839475 +0000 UTC m=+0.636110916 container init 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.275247394 +0000 UTC m=+0.652518765 container start 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:57:58 compute-0 podman[426407]: 2025-12-05 01:57:58.280232164 +0000 UTC m=+0.657503575 container attach 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 01:57:58 compute-0 distracted_heyrovsky[426423]: 167 167
Dec  5 01:57:58 compute-0 systemd[1]: libpod-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope: Deactivated successfully.
Dec  5 01:57:58 compute-0 podman[426428]: 2025-12-05 01:57:58.353148206 +0000 UTC m=+0.042946664 container died 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:57:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-938d895de4356d7aacf703ebcc71115207f9a1cbb4db079a925be15f6797039d-merged.mount: Deactivated successfully.
Dec  5 01:57:58 compute-0 podman[426428]: 2025-12-05 01:57:58.412649862 +0000 UTC m=+0.102448300 container remove 27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:57:58 compute-0 systemd[1]: libpod-conmon-27b0902208139ed6f9db56d79be8071854a1ca788c7d6396e5e62233a9ad15ae.scope: Deactivated successfully.
Dec  5 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.75179628 +0000 UTC m=+0.081996937 container create edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.717744587 +0000 UTC m=+0.047945294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:57:58 compute-0 systemd[1]: Started libpod-conmon-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope.
Dec  5 01:57:58 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.905529646 +0000 UTC m=+0.235730323 container init edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.923492959 +0000 UTC m=+0.253693646 container start edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 01:57:58 compute-0 podman[426449]: 2025-12-05 01:57:58.929793765 +0000 UTC m=+0.259994522 container attach edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  5 01:57:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:57:59 compute-0 funny_babbage[426466]: {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    "0": [
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "devices": [
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "/dev/loop3"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            ],
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_name": "ceph_lv0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_size": "21470642176",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "name": "ceph_lv0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "tags": {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_name": "ceph",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.crush_device_class": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.encrypted": "0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_id": "0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.vdo": "0"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            },
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "vg_name": "ceph_vg0"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        }
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    ],
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    "1": [
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "devices": [
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "/dev/loop4"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            ],
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_name": "ceph_lv1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_size": "21470642176",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "name": "ceph_lv1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "tags": {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_name": "ceph",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.crush_device_class": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.encrypted": "0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_id": "1",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.vdo": "0"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            },
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "vg_name": "ceph_vg1"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        }
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    ],
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    "2": [
Dec  5 01:57:59 compute-0 podman[158197]: time="2025-12-05T01:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "devices": [
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "/dev/loop5"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            ],
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_name": "ceph_lv2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_size": "21470642176",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "name": "ceph_lv2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "tags": {
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.cluster_name": "ceph",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.crush_device_class": "",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.encrypted": "0",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osd_id": "2",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:                "ceph.vdo": "0"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            },
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "type": "block",
Dec  5 01:57:59 compute-0 funny_babbage[426466]:            "vg_name": "ceph_vg2"
Dec  5 01:57:59 compute-0 funny_babbage[426466]:        }
Dec  5 01:57:59 compute-0 funny_babbage[426466]:    ]
Dec  5 01:57:59 compute-0 funny_babbage[426466]: }
Dec  5 01:57:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45381 "" "Go-http-client/1.1"
Dec  5 01:57:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9047 "" "Go-http-client/1.1"
Dec  5 01:57:59 compute-0 systemd[1]: libpod-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope: Deactivated successfully.
Dec  5 01:57:59 compute-0 podman[426449]: 2025-12-05 01:57:59.791501977 +0000 UTC m=+1.121702634 container died edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:57:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-033aa1161c6c27b4da6598fb8d9be093dbdc585a4b1ad80c0edb37aa9db83fff-merged.mount: Deactivated successfully.
Dec  5 01:57:59 compute-0 podman[426449]: 2025-12-05 01:57:59.8776674 +0000 UTC m=+1.207868057 container remove edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 01:57:59 compute-0 systemd[1]: libpod-conmon-edb5115d9d5a975fd48562eef418dff34411bd6c8be8ba869eada334c59ae341.scope: Deactivated successfully.
Dec  5 01:58:00 compute-0 podman[426626]: 2025-12-05 01:58:00.886563715 +0000 UTC m=+0.030145325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.016077003 +0000 UTC m=+0.159658633 container create f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec  5 01:58:01 compute-0 systemd[1]: Started libpod-conmon-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope.
Dec  5 01:58:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.145367444 +0000 UTC m=+0.288949084 container init f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.155379714 +0000 UTC m=+0.298961314 container start f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.160740184 +0000 UTC m=+0.304321874 container attach f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:58:01 compute-0 agitated_sinoussi[426642]: 167 167
Dec  5 01:58:01 compute-0 systemd[1]: libpod-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope: Deactivated successfully.
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.164307574 +0000 UTC m=+0.307889174 container died f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0a2778602f8e5b7f4311324cca8d41a1bfdfd28f03fa9244e50d42a858f009-merged.mount: Deactivated successfully.
Dec  5 01:58:01 compute-0 podman[426626]: 2025-12-05 01:58:01.224245213 +0000 UTC m=+0.367826813 container remove f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:58:01 compute-0 systemd[1]: libpod-conmon-f671a26ffe38e322612fb9f8ffa1c4e742eb91ae783711032aaf7b637dad7857.scope: Deactivated successfully.
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: ERROR   01:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:58:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.499374858 +0000 UTC m=+0.092416099 container create 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.470080557 +0000 UTC m=+0.063121818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:58:01 compute-0 systemd[1]: Started libpod-conmon-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope.
Dec  5 01:58:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.643038291 +0000 UTC m=+0.236079582 container init 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Dec  5 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.662321161 +0000 UTC m=+0.255362372 container start 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:58:01 compute-0 podman[426664]: 2025-12-05 01:58:01.66797684 +0000 UTC m=+0.261018121 container attach 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:58:01 compute-0 nova_compute[349548]: 2025-12-05 01:58:01.669 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:02 compute-0 vigorous_cray[426679]: {
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_id": 0,
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "type": "bluestore"
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    },
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_id": 1,
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "type": "bluestore"
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    },
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_id": 2,
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:        "type": "bluestore"
Dec  5 01:58:02 compute-0 vigorous_cray[426679]:    }
Dec  5 01:58:02 compute-0 vigorous_cray[426679]: }
Dec  5 01:58:02 compute-0 systemd[1]: libpod-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Deactivated successfully.
Dec  5 01:58:02 compute-0 podman[426664]: 2025-12-05 01:58:02.846279699 +0000 UTC m=+1.439320920 container died 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 01:58:02 compute-0 systemd[1]: libpod-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Consumed 1.175s CPU time.
Dec  5 01:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-32572efe6e3727cd3bafbd8a37e13fb1b3d7f9a7fbc9125b8ff003a53053990d-merged.mount: Deactivated successfully.
Dec  5 01:58:02 compute-0 podman[426664]: 2025-12-05 01:58:02.922689589 +0000 UTC m=+1.515730800 container remove 9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_cray, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Dec  5 01:58:02 compute-0 systemd[1]: libpod-conmon-9abe95a42b211e054f5a668caf8d5e0a0b040be1f5c73ae43241a1153393abcd.scope: Deactivated successfully.
Dec  5 01:58:02 compute-0 nova_compute[349548]: 2025-12-05 01:58:02.956 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:58:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:58:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:58:02 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:58:02 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 116ffca0-c04b-43b3-96ea-fe3552f6b185 does not exist
Dec  5 01:58:02 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b3f8ea15-c77a-481b-9e0e-b17d64055afd does not exist
Dec  5 01:58:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:58:03 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:58:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:06 compute-0 nova_compute[349548]: 2025-12-05 01:58:06.659 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:06 compute-0 podman[426776]: 2025-12-05 01:58:06.71806728 +0000 UTC m=+0.113329615 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 01:58:06 compute-0 podman[426777]: 2025-12-05 01:58:06.718151242 +0000 UTC m=+0.103926451 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:58:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1462: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:07 compute-0 nova_compute[349548]: 2025-12-05 01:58:07.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:08 compute-0 podman[426816]: 2025-12-05 01:58:08.707819484 +0000 UTC m=+0.110755852 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 01:58:08 compute-0 podman[426817]: 2025-12-05 01:58:08.719221814 +0000 UTC m=+0.105794684 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 01:58:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:11 compute-0 nova_compute[349548]: 2025-12-05 01:58:11.662 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:12 compute-0 podman[426856]: 2025-12-05 01:58:12.708809265 +0000 UTC m=+0.125856556 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 01:58:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:12 compute-0 nova_compute[349548]: 2025-12-05 01:58:12.966 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:58:16
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'vms', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log']
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:58:16 compute-0 nova_compute[349548]: 2025-12-05 01:58:16.663 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:58:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:58:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:17 compute-0 nova_compute[349548]: 2025-12-05 01:58:17.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:20 compute-0 podman[426878]: 2025-12-05 01:58:20.716751342 +0000 UTC m=+0.117596724 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 01:58:20 compute-0 podman[426877]: 2025-12-05 01:58:20.726869406 +0000 UTC m=+0.129617272 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 01:58:20 compute-0 podman[426880]: 2025-12-05 01:58:20.730787765 +0000 UTC m=+0.115100514 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm)
Dec  5 01:58:20 compute-0 podman[426879]: 2025-12-05 01:58:20.76094104 +0000 UTC m=+0.156870695 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 01:58:21 compute-0 nova_compute[349548]: 2025-12-05 01:58:21.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1469: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:22 compute-0 nova_compute[349548]: 2025-12-05 01:58:22.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:24 compute-0 nova_compute[349548]: 2025-12-05 01:58:24.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:24 compute-0 nova_compute[349548]: 2025-12-05 01:58:24.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:58:25 compute-0 nova_compute[349548]: 2025-12-05 01:58:25.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1471: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:26 compute-0 nova_compute[349548]: 2025-12-05 01:58:26.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:26 compute-0 nova_compute[349548]: 2025-12-05 01:58:26.670 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:58:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 01:58:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.904 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.905 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.905 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.906 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:58:27 compute-0 nova_compute[349548]: 2025-12-05 01:58:27.979 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1473: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:29 compute-0 podman[158197]: time="2025-12-05T01:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:58:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:58:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8625 "" "Go-http-client/1.1"
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.406 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.424 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.425 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.426 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.427 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.459 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.460 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.461 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.462 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:58:30 compute-0 nova_compute[349548]: 2025-12-05 01:58:30.462 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:58:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:58:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317975351' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.001 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.112 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.113 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.113 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.119 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.120 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.120 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.127 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.128 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.128 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000002 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.135 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.135 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.136 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: ERROR   01:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:58:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.580 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.581 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3206MB free_disk=59.855655670166016GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.581 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.582 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.673 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.719 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.720 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.721 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.721 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.762 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.782 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.782 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.797 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.817 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 01:58:31 compute-0 nova_compute[349548]: 2025-12-05 01:58:31.919 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:58:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:58:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3627712230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.445 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.454 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.470 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.472 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.472 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.890s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:58:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:32 compute-0 nova_compute[349548]: 2025-12-05 01:58:32.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.113 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.114 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:58:33 compute-0 nova_compute[349548]: 2025-12-05 01:58:33.115 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 01:58:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1475: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:36 compute-0 nova_compute[349548]: 2025-12-05 01:58:36.677 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:58:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:37 compute-0 podman[427009]: 2025-12-05 01:58:37.71735372 +0000 UTC m=+0.117131162 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 01:58:37 compute-0 podman[427010]: 2025-12-05 01:58:37.745234611 +0000 UTC m=+0.126120993 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:58:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:37 compute-0 nova_compute[349548]: 2025-12-05 01:58:37.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.318 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.327 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.330 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.335 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'name': 'vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.338 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T01:58:38.340069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T01:58:38.342688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.378 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.379 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.379 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.415 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.416 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.416 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.450 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.451 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.484 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.485 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.485 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.486 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.487 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.488 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.489 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.489 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T01:58:38.488256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T01:58:38.491218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.581 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.582 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.583 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.676 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.756 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.757 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.758 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.827 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.828 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.829 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.830 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.832 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.833 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T01:58:38.832764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.835 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.836 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.837 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.837 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 2069488567 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.838 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 288882839 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.839 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.latency volume: 182154388 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.839 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.840 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.840 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.841 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.842 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.843 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T01:58:38.843814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.845 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.846 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.847 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.848 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.849 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.850 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T01:58:38.853553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.854 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.855 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.855 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.856 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.857 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.857 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.861 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.861 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.862 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.862 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.863 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 41840640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.864 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.864 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.865 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.866 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.866 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T01:58:38.860781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T01:58:38.869147) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.910 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.941 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:38.970 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.004 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.006 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.007 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.008 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T01:58:39.008855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.009 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.011 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.012 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.012 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.013 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.013 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.014 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 9233370301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.014 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 32028870 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.015 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.016 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.016 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.017 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.018 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.020 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.019 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T01:58:39.019357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.020 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.021 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.021 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.022 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.023 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.023 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.024 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.024 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.025 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.026 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.027 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.028 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T01:58:39.027592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.032 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.038 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.045 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.050 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.053 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T01:58:39.052774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.054 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.055 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.056 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.057 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.057 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.058 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.058 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T01:58:39.056816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.059 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.060 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.061 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.062 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.063 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.064 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.065 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.065 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.066 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.068 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.069 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.069 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes volume: 7440 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.070 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T01:58:39.064484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T01:58:39.068469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.073 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.074 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 380 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T01:58:39.072982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.076 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.91015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.077 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.077 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T01:58:39.076087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.080 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.081 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T01:58:39.079175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.083 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T01:58:39.083410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.085 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.086 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T01:58:39.085619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 37770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 44210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T01:58:39.087997) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.088 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/cpu volume: 339210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 37720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.089 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T01:58:39.090021) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.090 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T01:58:39.092343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] b82c3f0e-6d6a-4a7b-9556-b609ad63e497/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.093 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 01:58:39.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 01:58:39 compute-0 podman[427049]: 2025-12-05 01:58:39.700989732 +0000 UTC m=+0.099415485 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  5 01:58:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1478: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:39 compute-0 podman[427048]: 2025-12-05 01:58:39.716404554 +0000 UTC m=+0.131285508 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  5 01:58:41 compute-0 nova_compute[349548]: 2025-12-05 01:58:41.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:42 compute-0 nova_compute[349548]: 2025-12-05 01:58:42.991 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:43 compute-0 podman[427087]: 2025-12-05 01:58:43.648542416 +0000 UTC m=+0.070669770 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543)
Dec  5 01:58:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1480: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:58:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:58:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:58:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4042815258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:58:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:58:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:58:46 compute-0 nova_compute[349548]: 2025-12-05 01:58:46.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:47 compute-0 nova_compute[349548]: 2025-12-05 01:58:47.995 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:51 compute-0 nova_compute[349548]: 2025-12-05 01:58:51.689 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:51 compute-0 podman[427109]: 2025-12-05 01:58:51.713607272 +0000 UTC m=+0.099993101 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 01:58:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1484: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:51 compute-0 podman[427115]: 2025-12-05 01:58:51.731222435 +0000 UTC m=+0.101663618 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 01:58:51 compute-0 podman[427108]: 2025-12-05 01:58:51.739174968 +0000 UTC m=+0.135722282 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  5 01:58:51 compute-0 podman[427110]: 2025-12-05 01:58:51.777590314 +0000 UTC m=+0.150933518 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 01:58:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:53 compute-0 nova_compute[349548]: 2025-12-05 01:58:53.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.190 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.192 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:58:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:58:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:58:56 compute-0 nova_compute[349548]: 2025-12-05 01:58:56.690 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:58:58 compute-0 nova_compute[349548]: 2025-12-05 01:58:58.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:58:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:58:59 compute-0 podman[158197]: time="2025-12-05T01:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:58:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:58:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: ERROR   01:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:59:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:59:01 compute-0 nova_compute[349548]: 2025-12-05 01:59:01.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:03 compute-0 nova_compute[349548]: 2025-12-05 01:59:03.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 44bf2aef-3c19-4b47-b5bf-309d4d367874 does not exist
Dec  5 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 035531a3-d528-4db0-a724-16c7d23c724f does not exist
Dec  5 01:59:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ef16358e-3dd2-47b6-9afa-1d81fbf5af89 does not exist
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 01:59:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:04 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.637620511 +0000 UTC m=+0.092192333 container create 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.597463726 +0000 UTC m=+0.052035628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1491: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:05 compute-0 systemd[1]: Started libpod-conmon-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope.
Dec  5 01:59:05 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.79150783 +0000 UTC m=+0.246079682 container init 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.802462577 +0000 UTC m=+0.257034429 container start 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.809396911 +0000 UTC m=+0.263968763 container attach 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:59:05 compute-0 amazing_lederberg[427479]: 167 167
Dec  5 01:59:05 compute-0 systemd[1]: libpod-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope: Deactivated successfully.
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.813323871 +0000 UTC m=+0.267895683 container died 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 01:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce929ff9a0beaa3e7a7b203e90a107a1c3e43836940be3d661bd2fbdc250397d-merged.mount: Deactivated successfully.
Dec  5 01:59:05 compute-0 podman[427463]: 2025-12-05 01:59:05.876759098 +0000 UTC m=+0.331330910 container remove 7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 01:59:05 compute-0 systemd[1]: libpod-conmon-7d377280190c3c5b55f23a497a4a4d28f5735b15bc378aa65b8e0dc04731ea61.scope: Deactivated successfully.
Dec  5 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.17179091 +0000 UTC m=+0.110195407 container create ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.131721598 +0000 UTC m=+0.070126205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:06 compute-0 systemd[1]: Started libpod-conmon-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope.
Dec  5 01:59:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.332028508 +0000 UTC m=+0.270433055 container init ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.351043491 +0000 UTC m=+0.289448008 container start ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 01:59:06 compute-0 podman[427501]: 2025-12-05 01:59:06.356147913 +0000 UTC m=+0.294552430 container attach ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:59:06 compute-0 nova_compute[349548]: 2025-12-05 01:59:06.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:07 compute-0 focused_mendeleev[427517]: --> passed data devices: 0 physical, 3 LVM
Dec  5 01:59:07 compute-0 focused_mendeleev[427517]: --> relative data size: 1.0
Dec  5 01:59:07 compute-0 focused_mendeleev[427517]: --> All data devices are unavailable
Dec  5 01:59:07 compute-0 systemd[1]: libpod-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Deactivated successfully.
Dec  5 01:59:07 compute-0 podman[427501]: 2025-12-05 01:59:07.509111912 +0000 UTC m=+1.447516439 container died ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 01:59:07 compute-0 systemd[1]: libpod-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Consumed 1.087s CPU time.
Dec  5 01:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bf5f20731a46d749607ae5da7dc40e83dc49b4aff46327b798eea13b6b8da51-merged.mount: Deactivated successfully.
Dec  5 01:59:07 compute-0 podman[427501]: 2025-12-05 01:59:07.600353988 +0000 UTC m=+1.538758505 container remove ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mendeleev, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 01:59:07 compute-0 systemd[1]: libpod-conmon-ee090d7b0b9a2bdebe9e66fab95668236f55be6ab019da283d987fbaf7c7f41f.scope: Deactivated successfully.
Dec  5 01:59:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:07 compute-0 podman[427583]: 2025-12-05 01:59:07.858489387 +0000 UTC m=+0.085506476 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 01:59:07 compute-0 podman[427584]: 2025-12-05 01:59:07.889239748 +0000 UTC m=+0.098393237 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 01:59:08 compute-0 nova_compute[349548]: 2025-12-05 01:59:08.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.644306494 +0000 UTC m=+0.091566995 container create 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:59:08 compute-0 systemd[1]: Started libpod-conmon-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope.
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.60774354 +0000 UTC m=+0.055004081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:08 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.76735276 +0000 UTC m=+0.214613281 container init 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.785732045 +0000 UTC m=+0.232992556 container start 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.790311123 +0000 UTC m=+0.237571624 container attach 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:59:08 compute-0 elated_albattani[427754]: 167 167
Dec  5 01:59:08 compute-0 systemd[1]: libpod-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope: Deactivated successfully.
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.796007393 +0000 UTC m=+0.243267954 container died 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f253a9a6c0a07a71477510dd348fbeb99b93b118fc7c4a7c9c10016c9179e36-merged.mount: Deactivated successfully.
Dec  5 01:59:08 compute-0 podman[427739]: 2025-12-05 01:59:08.8726603 +0000 UTC m=+0.319920811 container remove 77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_albattani, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 01:59:08 compute-0 systemd[1]: libpod-conmon-77e8ddf2186f991194c98bc89c36394292da9f242a920b7a44439b12df2c9cc6.scope: Deactivated successfully.
Dec  5 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.138488304 +0000 UTC m=+0.073811398 container create be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.112526947 +0000 UTC m=+0.047850051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:09 compute-0 systemd[1]: Started libpod-conmon-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope.
Dec  5 01:59:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.28474657 +0000 UTC m=+0.220069694 container init be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.305546493 +0000 UTC m=+0.240869597 container start be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:59:09 compute-0 podman[427776]: 2025-12-05 01:59:09.311762077 +0000 UTC m=+0.247085231 container attach be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:59:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1493: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:10 compute-0 bold_rosalind[427792]: {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    "0": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "devices": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "/dev/loop3"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            ],
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_name": "ceph_lv0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_size": "21470642176",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "name": "ceph_lv0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "tags": {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_name": "ceph",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.crush_device_class": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.encrypted": "0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_id": "0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.vdo": "0"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            },
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "vg_name": "ceph_vg0"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        }
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    ],
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    "1": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "devices": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "/dev/loop4"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            ],
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_name": "ceph_lv1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_size": "21470642176",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "name": "ceph_lv1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "tags": {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_name": "ceph",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.crush_device_class": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.encrypted": "0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_id": "1",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.vdo": "0"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            },
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "vg_name": "ceph_vg1"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        }
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    ],
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    "2": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "devices": [
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "/dev/loop5"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            ],
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_name": "ceph_lv2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_size": "21470642176",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "name": "ceph_lv2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "tags": {
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cephx_lockbox_secret": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.cluster_name": "ceph",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.crush_device_class": "",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.encrypted": "0",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osd_id": "2",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:                "ceph.vdo": "0"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            },
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "type": "block",
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:            "vg_name": "ceph_vg2"
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:        }
Dec  5 01:59:10 compute-0 bold_rosalind[427792]:    ]
Dec  5 01:59:10 compute-0 bold_rosalind[427792]: }
Dec  5 01:59:10 compute-0 systemd[1]: libpod-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope: Deactivated successfully.
Dec  5 01:59:10 compute-0 podman[427776]: 2025-12-05 01:59:10.198712377 +0000 UTC m=+1.134035471 container died be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 01:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8bcf5442bc6e13b4cb80edd10df6dbcf88a27b24cecf93987f0d5414a0fcec6a-merged.mount: Deactivated successfully.
Dec  5 01:59:10 compute-0 podman[427776]: 2025-12-05 01:59:10.296416793 +0000 UTC m=+1.231739857 container remove be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_rosalind, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:59:10 compute-0 systemd[1]: libpod-conmon-be29eaab8e1249549f0d659bc04b2101764681305bec7383666eb9b1cfb43d99.scope: Deactivated successfully.
Dec  5 01:59:10 compute-0 podman[427809]: 2025-12-05 01:59:10.35486467 +0000 UTC m=+0.107295246 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  5 01:59:10 compute-0 podman[427802]: 2025-12-05 01:59:10.3680747 +0000 UTC m=+0.137456651 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm)
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.179709219 +0000 UTC m=+0.064916389 container create c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 01:59:11 compute-0 systemd[1]: Started libpod-conmon-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope.
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.153093634 +0000 UTC m=+0.038300784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.317331234 +0000 UTC m=+0.202538374 container init c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.336419598 +0000 UTC m=+0.221626768 container start c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.343703222 +0000 UTC m=+0.228910352 container attach c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:59:11 compute-0 vibrant_shamir[427996]: 167 167
Dec  5 01:59:11 compute-0 systemd[1]: libpod-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope: Deactivated successfully.
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.354596207 +0000 UTC m=+0.239803377 container died c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 01:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5098479006f29ae869b4d53340e512f5e5d9d7da71c5ddc67debb12b48935df3-merged.mount: Deactivated successfully.
Dec  5 01:59:11 compute-0 podman[427980]: 2025-12-05 01:59:11.419350021 +0000 UTC m=+0.304557171 container remove c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 01:59:11 compute-0 systemd[1]: libpod-conmon-c16af941dc404c25ddf49b101601765f48e32844c129b70264ff4b03c8dfcee0.scope: Deactivated successfully.
Dec  5 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.687543722 +0000 UTC m=+0.070713442 container create b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 01:59:11 compute-0 nova_compute[349548]: 2025-12-05 01:59:11.699 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:11 compute-0 systemd[1]: Started libpod-conmon-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope.
Dec  5 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.661801871 +0000 UTC m=+0.044971631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 01:59:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.828431757 +0000 UTC m=+0.211601547 container init b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.847012418 +0000 UTC m=+0.230182128 container start b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 01:59:11 compute-0 podman[428019]: 2025-12-05 01:59:11.851924695 +0000 UTC m=+0.235094455 container attach b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:59:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]: {
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_id": 0,
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "type": "bluestore"
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    },
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_id": 1,
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "type": "bluestore"
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    },
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_id": 2,
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:        "type": "bluestore"
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]:    }
Dec  5 01:59:12 compute-0 relaxed_lederberg[428035]: }
Dec  5 01:59:13 compute-0 nova_compute[349548]: 2025-12-05 01:59:13.017 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:13 compute-0 systemd[1]: libpod-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Deactivated successfully.
Dec  5 01:59:13 compute-0 podman[428019]: 2025-12-05 01:59:13.03183491 +0000 UTC m=+1.415004670 container died b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 01:59:13 compute-0 systemd[1]: libpod-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Consumed 1.170s CPU time.
Dec  5 01:59:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa72de752d90970f2c2174266250838ea9943c3d806659182fc4f1151948444a-merged.mount: Deactivated successfully.
Dec  5 01:59:13 compute-0 podman[428019]: 2025-12-05 01:59:13.096069669 +0000 UTC m=+1.479239379 container remove b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lederberg, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 01:59:13 compute-0 systemd[1]: libpod-conmon-b61de0cd05a14a402e9d7af00aee49c8dca307e27bfc43f870396ab19d4c5d40.scope: Deactivated successfully.
Dec  5 01:59:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 01:59:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 01:59:13 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:13 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 732790b1-4952-41ab-98e7-4198f1c2d4c0 does not exist
Dec  5 01:59:13 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d8f4d3dd-0ce3-4d9a-b114-d21e26683301 does not exist
Dec  5 01:59:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1495: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:14 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 01:59:14 compute-0 podman[428129]: 2025-12-05 01:59:14.734647918 +0000 UTC m=+0.129192890 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Dec  5 01:59:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_01:59:16
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.meta', 'backups']
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 01:59:16 compute-0 nova_compute[349548]: 2025-12-05 01:59:16.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:59:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 01:59:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:18 compute-0 nova_compute[349548]: 2025-12-05 01:59:18.022 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1498: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:21 compute-0 nova_compute[349548]: 2025-12-05 01:59:21.705 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:22 compute-0 podman[428152]: 2025-12-05 01:59:22.70850148 +0000 UTC m=+0.100433443 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container)
Dec  5 01:59:22 compute-0 podman[428150]: 2025-12-05 01:59:22.712029139 +0000 UTC m=+0.107844801 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:59:22 compute-0 podman[428149]: 2025-12-05 01:59:22.714964031 +0000 UTC m=+0.113968293 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 01:59:22 compute-0 podman[428151]: 2025-12-05 01:59:22.774328654 +0000 UTC m=+0.168038577 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  5 01:59:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:23 compute-0 nova_compute[349548]: 2025-12-05 01:59:23.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:24 compute-0 nova_compute[349548]: 2025-12-05 01:59:24.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:24 compute-0 nova_compute[349548]: 2025-12-05 01:59:24.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 01:59:25 compute-0 nova_compute[349548]: 2025-12-05 01:59:25.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:26 compute-0 nova_compute[349548]: 2025-12-05 01:59:26.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:26 compute-0 nova_compute[349548]: 2025-12-05 01:59:26.708 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00221085813879664 of space, bias 1.0, pg target 0.663257441638992 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 01:59:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 01:59:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.864 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.865 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.866 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.867 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.868 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.871 349552 INFO nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Terminating instance#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.873 349552 DEBUG nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.921 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.922 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:59:27 compute-0 nova_compute[349548]: 2025-12-05 01:59:27.923 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.028 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 kernel: tap554930d3-ff (unregistering): left promiscuous mode
Dec  5 01:59:28 compute-0 NetworkManager[49092]: <info>  [1764899968.0798] device (tap554930d3-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00050|binding|INFO|Releasing lport 554930d3-ff53-4ef1-af0a-bad6acef1456 from this chassis (sb_readonly=0)
Dec  5 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00051|binding|INFO|Setting lport 554930d3-ff53-4ef1-af0a-bad6acef1456 down in Southbound
Dec  5 01:59:28 compute-0 ovn_controller[89286]: 2025-12-05T01:59:28Z|00052|binding|INFO|Removing iface tap554930d3-ff ovn-installed in OVS
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.122 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:63:18 192.168.0.23'], port_security=['fa:16:3e:43:63:18 192.168.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:cidrs': '192.168.0.23/24', 'neutron:device_id': 'b82c3f0e-6d6a-4a7b-9556-b609ad63e497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-port-nevnpfznt6pg', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=554930d3-ff53-4ef1-af0a-bad6acef1456) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.124 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 554930d3-ff53-4ef1-af0a-bad6acef1456 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.127 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.133 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.158 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a55fb5-2a21-411f-bd26-7c6c4955db74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  5 01:59:28 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 1.094s CPU time.
Dec  5 01:59:28 compute-0 systemd-machined[138700]: Machine qemu-2-instance-00000002 terminated.
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.200 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0602af4f-8acb-4da8-9314-e41d44c7d307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.205 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ad160e95-d486-4684-a255-20e253527cc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.245 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[8eaaa3e3-574e-4be9-b77e-e333758dba23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.277 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[52499975-3e92-479c-9f56-e3d98b22e97b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 39496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428246, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.299 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3023ec69-8edc-4b19-b1c8-fb51ec4cbd7b]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428247, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428247, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.303 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.305 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.320 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.321 349552 INFO nova.virt.libvirt.driver [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Instance destroyed successfully.#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.322 349552 DEBUG nova.objects.instance [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid b82c3f0e-6d6a-4a7b-9556-b609ad63e497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.321 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.324 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:59:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:28.325 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.336 349552 DEBUG nova.virt.libvirt.vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:49:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vozvkqjb7v2u-n3c5nyx5kkkm-vnf-x5qm3qqtonfj',id=2,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:49:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-rt9976xc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:49:19Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTc2MTMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  5 01:59:28 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTc2M
TMyNDQ3ODQwMzUwMzY5Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTU3NjEzMjQ0Nzg0MDM1MDM2OTI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01NzYxMzI0NDc4NDAzNTAzNjkyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b82c3f0e-6d6a-4a7b-9556-b609ad63e497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.337 349552 DEBUG nova.network.os_vif_util [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.338 349552 DEBUG nova.network.os_vif_util [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.338 349552 DEBUG os_vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.341 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap554930d3-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.346 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:28 compute-0 nova_compute[349548]: 2025-12-05 01:59:28.350 349552 INFO os_vif [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:43:63:18,bridge_name='br-int',has_traffic_filtering=True,id=554930d3-ff53-4ef1-af0a-bad6acef1456,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap554930d3-ff')#033[00m
Dec  5 01:59:28 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 01:59:28.336 349552 DEBUG nova.virt.libvirt.vif [None req-ed4253f5-b0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.191 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.192 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.192 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.194 349552 DEBUG oslo_concurrency.lockutils [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.194 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.195 349552 DEBUG nova.compute.manager [req-652fad9b-b7d7-40ed-be09-fd42bd510732 req-6aa27376-e300-4003-a26c-d12ab8428be0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-unplugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 01:59:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:29.493 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:29.494 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.585 349552 INFO nova.virt.libvirt.driver [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deleting instance files /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_del#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.586 349552 INFO nova.virt.libvirt.driver [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deletion of /var/lib/nova/instances/b82c3f0e-6d6a-4a7b-9556-b609ad63e497_del complete#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.594 349552 DEBUG nova.compute.manager [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.594 349552 DEBUG nova.compute.manager [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing instance network info cache due to event network-changed-554930d3-ff53-4ef1-af0a-bad6acef1456. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.595 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.693 349552 DEBUG nova.virt.libvirt.host [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.693 349552 INFO nova.virt.libvirt.host [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] UEFI support detected#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.696 349552 INFO nova.compute.manager [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 1.82 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.697 349552 DEBUG oslo.service.loopingcall [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.698 349552 DEBUG nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 01:59:29 compute-0 nova_compute[349548]: 2025-12-05 01:59:29.698 349552 DEBUG nova.network.neutron [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 01:59:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 263 MiB data, 357 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:29 compute-0 podman[158197]: time="2025-12-05T01:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:59:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:59:29 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.731 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.770 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.771 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.771 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.772 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Refreshing network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.774 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.776 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.827 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.828 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.828 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.829 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 01:59:30 compute-0 nova_compute[349548]: 2025-12-05 01:59:30.830 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:59:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:59:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700550426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.316 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: ERROR   01:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 01:59:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.492 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.493 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.493 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.503 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.504 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.504 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.512 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.564 349552 DEBUG nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.565 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.565 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 DEBUG oslo_concurrency.lockutils [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 DEBUG nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] No waiting events found dispatching network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.566 349552 WARNING nova.compute.manager [req-d28f1916-295b-4d91-8524-ff4d276adb30 req-ef4e8d6b-5100-4f30-893a-0952f652e254 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Received unexpected event network-vif-plugged-554930d3-ff53-4ef1-af0a-bad6acef1456 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 01:59:31 compute-0 nova_compute[349548]: 2025-12-05 01:59:31.713 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 250 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 341 B/s wr, 13 op/s
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.120 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3411MB free_disk=59.855655670166016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.121 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.428 349552 DEBUG nova.network.neutron [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.563 349552 INFO nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Took 2.87 seconds to deallocate network for instance.#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.636 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.672 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.673 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.673 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=59GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.804 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:59:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.966 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updated VIF entry in instance network info cache for port 554930d3-ff53-4ef1-af0a-bad6acef1456. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 01:59:32 compute-0 nova_compute[349548]: 2025-12-05 01:59:32.968 349552 DEBUG nova.network.neutron [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Updating instance_info_cache with network_info: [{"id": "554930d3-ff53-4ef1-af0a-bad6acef1456", "address": "fa:16:3e:43:63:18", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap554930d3-ff", "ovs_interfaceid": "554930d3-ff53-4ef1-af0a-bad6acef1456", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 01:59:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:59:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1634709864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.342 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.348 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.357 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:59:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.914 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.927 349552 DEBUG oslo_concurrency.lockutils [req-05d6b711-3c7c-478e-86bf-f89da1286ee3 req-2e65d5b8-16cc-4632-bc08-1ba8c04cc852 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-b82c3f0e-6d6a-4a7b-9556-b609ad63e497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.961 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.962 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:33 compute-0 nova_compute[349548]: 2025-12-05 01:59:33.963 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.111 349552 DEBUG oslo_concurrency.processutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 01:59:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 01:59:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/282690679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.613 349552 DEBUG oslo_concurrency.processutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.628 349552 DEBUG nova.compute.provider_tree [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.653 349552 DEBUG nova.scheduler.client.report [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.698 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.752 349552 INFO nova.scheduler.client.report [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance b82c3f0e-6d6a-4a7b-9556-b609ad63e497#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.837 349552 DEBUG oslo_concurrency.lockutils [None req-ed4253f5-b0d8-489f-aa77-63038a473231 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b82c3f0e-6d6a-4a7b-9556-b609ad63e497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.960 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.961 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:34 compute-0 nova_compute[349548]: 2025-12-05 01:59:34.962 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 01:59:36 compute-0 nova_compute[349548]: 2025-12-05 01:59:36.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 01:59:36 compute-0 nova_compute[349548]: 2025-12-05 01:59:36.715 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:37 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:37.496 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 01:59:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 01:59:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:38 compute-0 nova_compute[349548]: 2025-12-05 01:59:38.345 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:38 compute-0 podman[428349]: 2025-12-05 01:59:38.702967813 +0000 UTC m=+0.107475251 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 01:59:38 compute-0 podman[428348]: 2025-12-05 01:59:38.721375988 +0000 UTC m=+0.121647908 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 01:59:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 01:59:40 compute-0 podman[428388]: 2025-12-05 01:59:40.739587988 +0000 UTC m=+0.133563362 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 01:59:40 compute-0 podman[428387]: 2025-12-05 01:59:40.740442342 +0000 UTC m=+0.144283602 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 01:59:41 compute-0 nova_compute[349548]: 2025-12-05 01:59:41.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 01:59:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.315 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764899968.3130796, b82c3f0e-6d6a-4a7b-9556-b609ad63e497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.316 349552 INFO nova.compute.manager [-] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] VM Stopped (Lifecycle Event)#033[00m
Dec  5 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.336 349552 DEBUG nova.compute.manager [None req-f514dcc7-940d-4ec9-815b-f46f5eb2d1db - - - - - -] [instance: b82c3f0e-6d6a-4a7b-9556-b609ad63e497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 01:59:43 compute-0 nova_compute[349548]: 2025-12-05 01:59:43.347 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1510: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Dec  5 01:59:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 01:59:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 01:59:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 01:59:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/39374684' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 01:59:45 compute-0 podman[428425]: 2025-12-05 01:59:45.684571125 +0000 UTC m=+0.105410533 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, config_id=edpm, name=ubi9, vendor=Red Hat, Inc.)
Dec  5 01:59:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 01:59:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 01:59:46 compute-0 nova_compute[349548]: 2025-12-05 01:59:46.722 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:48 compute-0 nova_compute[349548]: 2025-12-05 01:59:48.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:51 compute-0 nova_compute[349548]: 2025-12-05 01:59:51.727 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:53 compute-0 nova_compute[349548]: 2025-12-05 01:59:53.353 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:53 compute-0 podman[428446]: 2025-12-05 01:59:53.692698757 +0000 UTC m=+0.090536306 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 01:59:53 compute-0 podman[428449]: 2025-12-05 01:59:53.696242747 +0000 UTC m=+0.094625491 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec  5 01:59:53 compute-0 podman[428447]: 2025-12-05 01:59:53.710730942 +0000 UTC m=+0.116413131 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 01:59:53 compute-0 podman[428448]: 2025-12-05 01:59:53.725671131 +0000 UTC m=+0.127927144 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 01:59:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.191 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.192 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 01:59:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 01:59:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 01:59:56 compute-0 nova_compute[349548]: 2025-12-05 01:59:56.729 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1517: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 01:59:58 compute-0 nova_compute[349548]: 2025-12-05 01:59:58.356 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 01:59:59 compute-0 podman[158197]: time="2025-12-05T01:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 01:59:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 01:59:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 01:59:59 compute-0 podman[158197]: @ - - [05/Dec/2025:01:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8626 "" "Go-http-client/1.1"
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: ERROR   02:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:00:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:00:01 compute-0 nova_compute[349548]: 2025-12-05 02:00:01.733 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:03 compute-0 nova_compute[349548]: 2025-12-05 02:00:03.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:04 compute-0 ovn_controller[89286]: 2025-12-05T02:00:04Z|00053|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  5 02:00:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:06 compute-0 nova_compute[349548]: 2025-12-05 02:00:06.735 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:08 compute-0 nova_compute[349548]: 2025-12-05 02:00:08.363 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:09 compute-0 podman[428533]: 2025-12-05 02:00:09.723341582 +0000 UTC m=+0.122046199 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:00:09 compute-0 podman[428534]: 2025-12-05 02:00:09.733375493 +0000 UTC m=+0.125327591 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:00:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:11 compute-0 podman[428575]: 2025-12-05 02:00:11.716523192 +0000 UTC m=+0.122156032 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:00:11 compute-0 podman[428576]: 2025-12-05 02:00:11.722047296 +0000 UTC m=+0.115024442 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  5 02:00:11 compute-0 nova_compute[349548]: 2025-12-05 02:00:11.737 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:13 compute-0 nova_compute[349548]: 2025-12-05 02:00:13.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:14 compute-0 podman[428782]: 2025-12-05 02:00:14.733420571 +0000 UTC m=+0.106243787 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:00:14 compute-0 podman[428782]: 2025-12-05 02:00:14.883123533 +0000 UTC m=+0.255946739 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:00:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1526: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:15 compute-0 podman[428900]: 2025-12-05 02:00:15.915481444 +0000 UTC m=+0.137392688 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, name=ubi9, container_name=kepler, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git)
Dec  5 02:00:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:00:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:16 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:00:16 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:00:16
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.meta']
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:00:16 compute-0 nova_compute[349548]: 2025-12-05 02:00:16.740 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:00:16 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:00:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1527: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:00:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:00:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 84fffc59-8bf5-4592-bdf5-4f28decde7b7 does not exist
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0dec3bc4-c7cd-4519-832c-8155773f6663 does not exist
Dec  5 02:00:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8547deb3-2b7f-4c5a-9e0e-0a361f82e02d does not exist
Dec  5 02:00:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:00:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:00:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:00:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:00:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:00:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:00:18 compute-0 nova_compute[349548]: 2025-12-05 02:00:18.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.065321137 +0000 UTC m=+0.064663791 container create eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.033199468 +0000 UTC m=+0.032542162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:19 compute-0 systemd[1]: Started libpod-conmon-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope.
Dec  5 02:00:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.190177304 +0000 UTC m=+0.189519998 container init eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.199602758 +0000 UTC m=+0.198945412 container start eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.20432052 +0000 UTC m=+0.203663174 container attach eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:00:19 compute-0 silly_nobel[429235]: 167 167
Dec  5 02:00:19 compute-0 systemd[1]: libpod-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope: Deactivated successfully.
Dec  5 02:00:19 compute-0 conmon[429235]: conmon eeec8b499590f3974c6f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope/container/memory.events
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.212369876 +0000 UTC m=+0.211712580 container died eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:00:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9560a9f41cb17ad6f7557fdeffdbf653903976e73370e44dff15388dcfda3bc5-merged.mount: Deactivated successfully.
Dec  5 02:00:19 compute-0 podman[429219]: 2025-12-05 02:00:19.288958711 +0000 UTC m=+0.288301375 container remove eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:00:19 compute-0 systemd[1]: libpod-conmon-eeec8b499590f3974c6f132add82c75a4dec3f828f3098e0646b541e7bf97f8f.scope: Deactivated successfully.
Dec  5 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.531312348 +0000 UTC m=+0.077159492 container create 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:00:19 compute-0 systemd[1]: Started libpod-conmon-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope.
Dec  5 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.505508605 +0000 UTC m=+0.051355829 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.672826711 +0000 UTC m=+0.218673885 container init 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.686316609 +0000 UTC m=+0.232163753 container start 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:00:19 compute-0 podman[429258]: 2025-12-05 02:00:19.691417802 +0000 UTC m=+0.237264946 container attach 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Dec  5 02:00:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:20 compute-0 lucid_wilson[429273]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:00:20 compute-0 lucid_wilson[429273]: --> relative data size: 1.0
Dec  5 02:00:20 compute-0 lucid_wilson[429273]: --> All data devices are unavailable
Dec  5 02:00:20 compute-0 systemd[1]: libpod-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Deactivated successfully.
Dec  5 02:00:20 compute-0 podman[429258]: 2025-12-05 02:00:20.945873434 +0000 UTC m=+1.491720608 container died 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 02:00:20 compute-0 systemd[1]: libpod-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Consumed 1.167s CPU time.
Dec  5 02:00:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-61efb10cd25439dd9e7b368c5e1cda61efa3591566dcb2beb5e05120655dbe1c-merged.mount: Deactivated successfully.
Dec  5 02:00:21 compute-0 podman[429258]: 2025-12-05 02:00:21.049381483 +0000 UTC m=+1.595228627 container remove 118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:00:21 compute-0 systemd[1]: libpod-conmon-118ff52700b13f3ea7d4eb1c337a966c873d9698d2a5a0f32c7c0c26926b3ea6.scope: Deactivated successfully.
Dec  5 02:00:21 compute-0 nova_compute[349548]: 2025-12-05 02:00:21.743 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1529: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.108646078 +0000 UTC m=+0.070418193 container create 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.079368638 +0000 UTC m=+0.041140813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:22 compute-0 systemd[1]: Started libpod-conmon-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope.
Dec  5 02:00:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.255044278 +0000 UTC m=+0.216816433 container init 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.272564189 +0000 UTC m=+0.234336304 container start 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:00:22 compute-0 zen_solomon[429470]: 167 167
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.279634697 +0000 UTC m=+0.241406802 container attach 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Dec  5 02:00:22 compute-0 systemd[1]: libpod-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope: Deactivated successfully.
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.280551503 +0000 UTC m=+0.242323598 container died 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d7b181d0f25f72f66b4af84e184d92c4b16884c2e0c692266b4e6054ef3b9cc-merged.mount: Deactivated successfully.
Dec  5 02:00:22 compute-0 podman[429456]: 2025-12-05 02:00:22.332667612 +0000 UTC m=+0.294439697 container remove 7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:00:22 compute-0 systemd[1]: libpod-conmon-7a57d2c519f633f2892c641a976ea9bce87b1aaebabfc288be77b6f4b53de58c.scope: Deactivated successfully.
Dec  5 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.629224767 +0000 UTC m=+0.079047284 container create 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.604220886 +0000 UTC m=+0.054043373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:22 compute-0 systemd[1]: Started libpod-conmon-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope.
Dec  5 02:00:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.811608984 +0000 UTC m=+0.261431571 container init 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 02:00:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.832544471 +0000 UTC m=+0.282366968 container start 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:00:22 compute-0 podman[429494]: 2025-12-05 02:00:22.840224936 +0000 UTC m=+0.290047463 container attach 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:00:23 compute-0 nova_compute[349548]: 2025-12-05 02:00:23.372 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]: {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    "0": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "devices": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "/dev/loop3"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            ],
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_name": "ceph_lv0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_size": "21470642176",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "name": "ceph_lv0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "tags": {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_name": "ceph",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.crush_device_class": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.encrypted": "0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_id": "0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.vdo": "0"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            },
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "vg_name": "ceph_vg0"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        }
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    ],
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    "1": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "devices": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "/dev/loop4"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            ],
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_name": "ceph_lv1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_size": "21470642176",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "name": "ceph_lv1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "tags": {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_name": "ceph",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.crush_device_class": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.encrypted": "0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_id": "1",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.vdo": "0"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            },
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "vg_name": "ceph_vg1"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        }
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    ],
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    "2": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "devices": [
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "/dev/loop5"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            ],
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_name": "ceph_lv2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_size": "21470642176",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "name": "ceph_lv2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "tags": {
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.cluster_name": "ceph",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.crush_device_class": "",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.encrypted": "0",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osd_id": "2",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:                "ceph.vdo": "0"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            },
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "type": "block",
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:            "vg_name": "ceph_vg2"
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:        }
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]:    ]
Dec  5 02:00:23 compute-0 sharp_montalcini[429510]: }
Dec  5 02:00:23 compute-0 systemd[1]: libpod-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope: Deactivated successfully.
Dec  5 02:00:23 compute-0 podman[429494]: 2025-12-05 02:00:23.675749665 +0000 UTC m=+1.125572152 container died 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:00:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7dd98cdf75ad7759cf0816bfb6b0893655ac6ddc4fa7bd331731b6829ad45fd7-merged.mount: Deactivated successfully.
Dec  5 02:00:23 compute-0 podman[429494]: 2025-12-05 02:00:23.740723065 +0000 UTC m=+1.190545552 container remove 9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:00:23 compute-0 systemd[1]: libpod-conmon-9627cb42a19ee30a69fe234f3a15f8da9fb51d7759fe02ceb86cb704aa5a1110.scope: Deactivated successfully.
Dec  5 02:00:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1530: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:23 compute-0 podman[429532]: 2025-12-05 02:00:23.840347585 +0000 UTC m=+0.081219156 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:00:23 compute-0 podman[429530]: 2025-12-05 02:00:23.841922039 +0000 UTC m=+0.096322358 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Dec  5 02:00:23 compute-0 podman[429531]: 2025-12-05 02:00:23.864223124 +0000 UTC m=+0.118620183 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., distribution-scope=public)
Dec  5 02:00:23 compute-0 podman[429533]: 2025-12-05 02:00:23.90656948 +0000 UTC m=+0.150936848 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.695079603 +0000 UTC m=+0.097662437 container create 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.65536337 +0000 UTC m=+0.057946234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:24 compute-0 systemd[1]: Started libpod-conmon-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope.
Dec  5 02:00:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.860357851 +0000 UTC m=+0.262940715 container init 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.881816982 +0000 UTC m=+0.284399846 container start 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.888753687 +0000 UTC m=+0.291336541 container attach 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 02:00:24 compute-0 nervous_yalow[429768]: 167 167
Dec  5 02:00:24 compute-0 systemd[1]: libpod-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope: Deactivated successfully.
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.895523136 +0000 UTC m=+0.298106000 container died 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:00:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-e18f43591846e5ff7a23e9359dd6ec482ab1dbc8824a555a2c8168f39a3c126a-merged.mount: Deactivated successfully.
Dec  5 02:00:24 compute-0 podman[429752]: 2025-12-05 02:00:24.972342378 +0000 UTC m=+0.374925212 container remove 28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:00:24 compute-0 systemd[1]: libpod-conmon-28479d2f93fd2a0fb8acc1a763e7aa827a2494222cba59249b22ee138fd5195e.scope: Deactivated successfully.
Dec  5 02:00:25 compute-0 nova_compute[349548]: 2025-12-05 02:00:25.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:25 compute-0 nova_compute[349548]: 2025-12-05 02:00:25.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.207423601 +0000 UTC m=+0.084599130 container create ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.16058813 +0000 UTC m=+0.037763679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:00:25 compute-0 systemd[1]: Started libpod-conmon-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope.
Dec  5 02:00:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.381012093 +0000 UTC m=+0.258187652 container init ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.397783752 +0000 UTC m=+0.274959281 container start ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:00:25 compute-0 podman[429791]: 2025-12-05 02:00:25.402969288 +0000 UTC m=+0.280144837 container attach ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 02:00:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]: {
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_id": 0,
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "type": "bluestore"
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    },
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_id": 1,
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "type": "bluestore"
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    },
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_id": 2,
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:        "type": "bluestore"
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]:    }
Dec  5 02:00:26 compute-0 goofy_hamilton[429807]: }
Dec  5 02:00:26 compute-0 systemd[1]: libpod-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Deactivated successfully.
Dec  5 02:00:26 compute-0 podman[429791]: 2025-12-05 02:00:26.562742407 +0000 UTC m=+1.439917956 container died ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:00:26 compute-0 systemd[1]: libpod-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Consumed 1.166s CPU time.
Dec  5 02:00:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-279f7b11079ce98ba18479e455bfe75702d5c6b361ce95841e1a0cf749f8dee4-merged.mount: Deactivated successfully.
Dec  5 02:00:26 compute-0 podman[429791]: 2025-12-05 02:00:26.647775919 +0000 UTC m=+1.524951448 container remove ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:00:26 compute-0 systemd[1]: libpod-conmon-ca6e738071c0c0fdf4cf604515a234c5ee2c4a6afeb94b37d8a2a410dfe5881f.scope: Deactivated successfully.
Dec  5 02:00:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:00:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:00:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1b469a19-6aa1-4466-93e5-fe6258fe0b30 does not exist
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b8e6de92-cba1-4dfc-be5b-1ae67dac1842 does not exist
Dec  5 02:00:26 compute-0 nova_compute[349548]: 2025-12-05 02:00:26.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016572374365110374 of space, bias 1.0, pg target 0.4971712309533112 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:00:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:00:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:00:27 compute-0 nova_compute[349548]: 2025-12-05 02:00:27.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:27 compute-0 nova_compute[349548]: 2025-12-05 02:00:27.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.956 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.957 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:00:28 compute-0 nova_compute[349548]: 2025-12-05 02:00:28.958 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:00:29 compute-0 podman[158197]: time="2025-12-05T02:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:00:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:00:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.213 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.231 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.232 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.233 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.234 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.268 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.269 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.270 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.270 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.271 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: ERROR   02:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:00:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:00:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/86507028' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:00:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.788 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.881 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.882 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.882 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.888 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.889 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.889 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.895 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.896 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:31 compute-0 nova_compute[349548]: 2025-12-05 02:00:31.896 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.371 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3406MB free_disk=59.88886642456055GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.372 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.526 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.527 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.527 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:00:32 compute-0 nova_compute[349548]: 2025-12-05 02:00:32.721 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:00:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:00:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176152989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.255 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.265 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.283 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.313 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.314 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.316 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.316 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:00:33 compute-0 nova_compute[349548]: 2025-12-05 02:00:33.379 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1535: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.164 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.165 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:36 compute-0 nova_compute[349548]: 2025-12-05 02:00:36.753 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:37 compute-0 nova_compute[349548]: 2025-12-05 02:00:37.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:00:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.319 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.320 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d388c20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'name': 'vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.348 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.355 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.355 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.356 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.360 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:00:38.356446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.360 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:00:38.360169) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 nova_compute[349548]: 2025-12-05 02:00:38.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.387 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.387 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.388 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.417 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.418 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.418 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.443 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.443 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.444 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.446 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:00:38.446223) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.449 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:00:38.448234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.497 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.498 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.498 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.575 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.577 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.641 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.641 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.642 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.646 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 1788689993 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.647 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 318906117 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:00:38.645576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.648 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.latency volume: 246265233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.649 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.650 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.651 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.651 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.652 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.653 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.656 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.657 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.657 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:00:38.657972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.658 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.659 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.660 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.661 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.662 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.662 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.663 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.664 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.665 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.666 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.668 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:00:38.669869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.670 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.671 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.672 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.673 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.675 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.676 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.678 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.681 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:00:38.681330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.686 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:00:38.685427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.712 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.748 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.776 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.780 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.783 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.784 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:00:38.782977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.784 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 7184458071 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.785 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 30429022 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.786 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.787 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.788 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.789 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.790 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.791 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.792 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.795 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.796 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.797 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:00:38.796964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.798 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.799 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.800 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.800 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.801 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.801 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.802 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.802 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.803 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.805 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.806 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:00:38.806327) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.811 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.815 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.820 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:00:38.822947) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:00:38.826008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.826 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.828 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.829 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.829 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.830 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.831 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.831 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.833 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.834 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:00:38.833129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:00:38.835843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.836 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.838 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:00:38.838495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.839 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.841 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/memory.usage volume: 49.02734375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:00:38.841778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.842 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.844 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.845 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.846 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:00:38.844800) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.848 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:00:38.847940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:00:38.849974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.850 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.851 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/cpu volume: 39700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:00:38.852235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 46130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.853 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 39710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.855 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.compute.pollsters [-] 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:00:38.855181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.857 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:00:38.857128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:00:38.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:00:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:40 compute-0 podman[429947]: 2025-12-05 02:00:40.715352379 +0000 UTC m=+0.112631985 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:00:40 compute-0 podman[429946]: 2025-12-05 02:00:40.741251115 +0000 UTC m=+0.139743245 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 02:00:41 compute-0 nova_compute[349548]: 2025-12-05 02:00:41.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:42 compute-0 podman[429988]: 2025-12-05 02:00:42.725116405 +0000 UTC m=+0.117674917 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 02:00:42 compute-0 podman[429987]: 2025-12-05 02:00:42.731058111 +0000 UTC m=+0.129242270 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec  5 02:00:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:43 compute-0 nova_compute[349548]: 2025-12-05 02:00:43.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:00:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:00:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:00:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2835619917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:00:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:00:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:00:46 compute-0 podman[430025]: 2025-12-05 02:00:46.719601722 +0000 UTC m=+0.120245449 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Dec  5 02:00:46 compute-0 nova_compute[349548]: 2025-12-05 02:00:46.760 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:48 compute-0 nova_compute[349548]: 2025-12-05 02:00:48.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:51 compute-0 nova_compute[349548]: 2025-12-05 02:00:51.765 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:53 compute-0 nova_compute[349548]: 2025-12-05 02:00:53.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:54 compute-0 podman[430044]: 2025-12-05 02:00:54.717386153 +0000 UTC m=+0.118621743 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:00:54 compute-0 podman[430045]: 2025-12-05 02:00:54.74050702 +0000 UTC m=+0.131821052 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:00:54 compute-0 podman[430047]: 2025-12-05 02:00:54.761725045 +0000 UTC m=+0.136564466 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 02:00:54 compute-0 podman[430046]: 2025-12-05 02:00:54.774839322 +0000 UTC m=+0.160565768 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 02:00:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.193 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.194 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:00:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:00:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:00:56 compute-0 nova_compute[349548]: 2025-12-05 02:00:56.767 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:00:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:00:58 compute-0 nova_compute[349548]: 2025-12-05 02:00:58.397 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:00:59 compute-0 podman[158197]: time="2025-12-05T02:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:00:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:00:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8635 "" "Go-http-client/1.1"
Dec  5 02:00:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: ERROR   02:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:01:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:01:01 compute-0 nova_compute[349548]: 2025-12-05 02:01:01.771 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:03 compute-0 nova_compute[349548]: 2025-12-05 02:01:03.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:06 compute-0 nova_compute[349548]: 2025-12-05 02:01:06.773 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:08 compute-0 nova_compute[349548]: 2025-12-05 02:01:08.404 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:11 compute-0 podman[430140]: 2025-12-05 02:01:11.675446091 +0000 UTC m=+0.081806632 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:01:11 compute-0 podman[430139]: 2025-12-05 02:01:11.721364337 +0000 UTC m=+0.123895261 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:01:11 compute-0 nova_compute[349548]: 2025-12-05 02:01:11.776 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:13 compute-0 nova_compute[349548]: 2025-12-05 02:01:13.408 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:13 compute-0 podman[430181]: 2025-12-05 02:01:13.688726003 +0000 UTC m=+0.104481167 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, tcib_managed=true)
Dec  5 02:01:13 compute-0 podman[430182]: 2025-12-05 02:01:13.720402361 +0000 UTC m=+0.118898181 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  5 02:01:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.212557) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075212984, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3382859, "memory_usage": 3431120, "flush_reason": "Manual Compaction"}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075256454, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3316936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30123, "largest_seqno": 32164, "table_properties": {"data_size": 3307643, "index_size": 5851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18521, "raw_average_key_size": 20, "raw_value_size": 3289232, "raw_average_value_size": 3559, "num_data_blocks": 260, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764899849, "oldest_key_time": 1764899849, "file_creation_time": 1764900075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 43973 microseconds, and 22118 cpu microseconds.
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.256523) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3316936 bytes OK
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.256561) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260573) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260714) EVENT_LOG_v1 {"time_micros": 1764900075260691, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.260757) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3374338, prev total WAL file size 3374338, number of live WAL files 2.
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.263440) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3239KB)], [68(7036KB)]
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075263551, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10522014, "oldest_snapshot_seqno": -1}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5338 keys, 8820868 bytes, temperature: kUnknown
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075497067, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8820868, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8784770, "index_size": 21652, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 133667, "raw_average_key_size": 25, "raw_value_size": 8687757, "raw_average_value_size": 1627, "num_data_blocks": 894, "num_entries": 5338, "num_filter_entries": 5338, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.497428) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8820868 bytes
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.502348) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 45.0 rd, 37.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 5852, records dropped: 514 output_compression: NoCompression
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.502416) EVENT_LOG_v1 {"time_micros": 1764900075502389, "job": 38, "event": "compaction_finished", "compaction_time_micros": 233628, "compaction_time_cpu_micros": 38860, "output_level": 6, "num_output_files": 1, "total_output_size": 8820868, "num_input_records": 5852, "num_output_records": 5338, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075504135, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900075507564, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.263228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:01:15.507873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:01:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:01:16
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images']
Dec  5 02:01:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:01:16 compute-0 nova_compute[349548]: 2025-12-05 02:01:16.779 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:01:17 compute-0 podman[430218]: 2025-12-05 02:01:17.737310307 +0000 UTC m=+0.143295195 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.openshift.expose-services=, architecture=x86_64, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git)
Dec  5 02:01:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:18 compute-0 nova_compute[349548]: 2025-12-05 02:01:18.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:21 compute-0 nova_compute[349548]: 2025-12-05 02:01:21.783 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:23 compute-0 nova_compute[349548]: 2025-12-05 02:01:23.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:25 compute-0 podman[430239]: 2025-12-05 02:01:25.707157536 +0000 UTC m=+0.103715048 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:01:25 compute-0 podman[430238]: 2025-12-05 02:01:25.732325421 +0000 UTC m=+0.136878878 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 02:01:25 compute-0 podman[430241]: 2025-12-05 02:01:25.746490368 +0000 UTC m=+0.134721597 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package 
manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:01:25 compute-0 podman[430240]: 2025-12-05 02:01:25.761626522 +0000 UTC m=+0.153565565 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 02:01:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.086 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.087 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:01:26 compute-0 nova_compute[349548]: 2025-12-05 02:01:26.787 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0016572374365110374 of space, bias 1.0, pg target 0.4971712309533112 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:01:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.624 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.626 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.627 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.627 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.628 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.630 349552 INFO nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Terminating instance#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.632 349552 DEBUG nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:01:27 compute-0 kernel: tap4341bf52-6b (unregistering): left promiscuous mode
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.773 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00054|binding|INFO|Releasing lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 from this chassis (sb_readonly=0)
Dec  5 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00055|binding|INFO|Setting lport 4341bf52-6bd5-42ee-b25d-f3d9844af854 down in Southbound
Dec  5 02:01:27 compute-0 ovn_controller[89286]: 2025-12-05T02:01:27Z|00056|binding|INFO|Removing iface tap4341bf52-6b ovn-installed in OVS
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.781 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:27 compute-0 NetworkManager[49092]: <info>  [1764900087.7854] device (tap4341bf52-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.786 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:a7:22 192.168.0.25'], port_security=['fa:16:3e:68:a7:22 192.168.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:cidrs': '192.168.0.25/24', 'neutron:device_id': '7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-port-3t3utgry676a', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.236', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=4341bf52-6bd5-42ee-b25d-f3d9844af854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.788 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 4341bf52-6bd5-42ee-b25d-f3d9844af854 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.789 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.795 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.805 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[80f8e2b6-5c92-440c-a904-c4b343d17de8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  5 02:01:27 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 44.564s CPU time.
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.847 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[f9756fee-271d-4ef9-93f1-bdf174755de9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 systemd-machined[138700]: Machine qemu-3-instance-00000003 terminated.
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.852 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[7768c2c0-6996-4a75-ac44-fd921107d33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.885 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e959aa-9434-420a-abc1-1cf65bd32f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.904 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d9758a03-219d-4f00-9d4b-9fd090774ee6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 14, 'rx_bytes': 616, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 39496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430450, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5b547bd8-202e-42d0-8fa9-a35128c929a2]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430451, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430451, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.922 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.924 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:27 compute-0 nova_compute[349548]: 2025-12-05 02:01:27.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.931 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.932 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.932 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:01:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:27.933 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.063 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.071 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.082 349552 INFO nova.virt.libvirt.driver [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Instance destroyed successfully.#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.083 349552 DEBUG nova.objects.instance [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.095 349552 DEBUG nova.virt.libvirt.vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-vyar5vmyxehf-7qpgpa3gxwp3-vnf-gvxpa75bo2i7',id=3,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:53:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-6yiphc1y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:53:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDYyNzQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  5 02:01:28 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDYyN
zQ5Mjg2NTc4Njk5Mzk3Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTA2Mjc0OTI4NjU3ODY5OTM5NzI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wNjI3NDkyODY1Nzg2OTkzOTcyPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.095 349552 DEBUG nova.network.os_vif_util [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.096 349552 DEBUG nova.network.os_vif_util [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.096 349552 DEBUG os_vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.097 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.098 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4341bf52-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.102 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.103 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.106 349552 INFO os_vif [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:68:a7:22,bridge_name='br-int',has_traffic_filtering=True,id=4341bf52-6bd5-42ee-b25d-f3d9844af854,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap4341bf52-6b')#033[00m
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:01:28 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 02:01:28.095 349552 DEBUG nova.virt.libvirt.vif [None req-5fa94621-3d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 71bc29b5-4049-4d6e-b918-93976ce33fca does not exist
Dec  5 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6c8af796-3fa1-425b-9e24-1b3325a9b73e does not exist
Dec  5 02:01:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 71654bf0-85c1-461d-a9be-e12534705c08 does not exist
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:01:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:01:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.267 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.267 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG oslo_concurrency.lockutils [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.268 349552 DEBUG nova.compute.manager [req-116ab8ea-f481-4def-8776-d737c3cf667d req-3a37b4ad-7f7b-4f05-8c55-ac69ecfb1b4b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-unplugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:01:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:28.530 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:01:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:28.530 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.533 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG nova.compute.manager [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG nova.compute.manager [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing instance network info cache due to event network-changed-4341bf52-6bd5-42ee-b25d-f3d9844af854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.911 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.912 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:01:28 compute-0 nova_compute[349548]: 2025-12-05 02:01:28.912 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Refreshing network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.163379512 +0000 UTC m=+0.076239438 container create e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.136070746 +0000 UTC m=+0.048930712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:29 compute-0 systemd[1]: Started libpod-conmon-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope.
Dec  5 02:01:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.291 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.293 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.293 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.31529815 +0000 UTC m=+0.228158126 container init e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.333607933 +0000 UTC m=+0.246467889 container start e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.340111285 +0000 UTC m=+0.252971311 container attach e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.343 349552 INFO nova.virt.libvirt.driver [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deleting instance files /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_del#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.345 349552 INFO nova.virt.libvirt.driver [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deletion of /var/lib/nova/instances/7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5_del complete#033[00m
Dec  5 02:01:29 compute-0 recursing_noyce[430651]: 167 167
Dec  5 02:01:29 compute-0 systemd[1]: libpod-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope: Deactivated successfully.
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.349660753 +0000 UTC m=+0.262520709 container died e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:01:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-174470297fb1cfddbbb44a99d6c09660504ef6d14ad74d192f25c843a33b2742-merged.mount: Deactivated successfully.
Dec  5 02:01:29 compute-0 podman[430635]: 2025-12-05 02:01:29.429024698 +0000 UTC m=+0.341884614 container remove e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_noyce, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.432 349552 INFO nova.compute.manager [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 1.80 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.433 349552 DEBUG oslo.service.loopingcall [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.433 349552 DEBUG nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:01:29 compute-0 nova_compute[349548]: 2025-12-05 02:01:29.434 349552 DEBUG nova.network.neutron [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:01:29 compute-0 systemd[1]: libpod-conmon-e4da1d0d085bf4e234408a53ab968ab9bbd31a2ff9981fb68c012c425d1ff08a.scope: Deactivated successfully.
Dec  5 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.704374886 +0000 UTC m=+0.080776745 container create b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:01:29 compute-0 podman[158197]: time="2025-12-05T02:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.670063404 +0000 UTC m=+0.046465303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:29 compute-0 systemd[1]: Started libpod-conmon-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope.
Dec  5 02:01:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 201 MiB data, 323 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.872538479 +0000 UTC m=+0.248940378 container init b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.900256916 +0000 UTC m=+0.276658775 container start b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:01:29 compute-0 podman[430675]: 2025-12-05 02:01:29.908005963 +0000 UTC m=+0.284407882 container attach b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:01:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45513 "" "Go-http-client/1.1"
Dec  5 02:01:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9044 "" "Go-http-client/1.1"
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.194 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updated VIF entry in instance network info cache for port 4341bf52-6bd5-42ee-b25d-f3d9844af854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.194 349552 DEBUG nova.network.neutron [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [{"id": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "address": "fa:16:3e:68:a7:22", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4341bf52-6b", "ovs_interfaceid": "4341bf52-6bd5-42ee-b25d-f3d9844af854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.221 349552 DEBUG oslo_concurrency.lockutils [req-c1c324c3-743e-4218-bd75-87bdb74aa50d req-fd507ea1-c6db-4910-af86-f95cdcf8ac3e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.453 349552 DEBUG nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.454 349552 DEBUG oslo_concurrency.lockutils [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.455 349552 DEBUG nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] No waiting events found dispatching network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.455 349552 WARNING nova.compute.manager [req-afa7b141-d8b0-4367-8de7-47c335cff4eb req-cd005135-6607-4f2e-ba0d-60e6235eb99b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Received unexpected event network-vif-plugged-4341bf52-6bd5-42ee-b25d-f3d9844af854 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.688 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.713 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.714 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.715 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.787 349552 DEBUG nova.network.neutron [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.808 349552 INFO nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Took 1.37 seconds to deallocate network for instance.#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.882 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:30 compute-0 nova_compute[349548]: 2025-12-05 02:01:30.883 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.034 349552 DEBUG oslo_concurrency.processutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:31 compute-0 magical_curie[430692]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:01:31 compute-0 magical_curie[430692]: --> relative data size: 1.0
Dec  5 02:01:31 compute-0 magical_curie[430692]: --> All data devices are unavailable
Dec  5 02:01:31 compute-0 systemd[1]: libpod-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Deactivated successfully.
Dec  5 02:01:31 compute-0 podman[430675]: 2025-12-05 02:01:31.260537713 +0000 UTC m=+1.636939562 container died b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:01:31 compute-0 systemd[1]: libpod-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Consumed 1.260s CPU time.
Dec  5 02:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3256c754df319e642871aacd5b7adc3ce230c19d8a761631476d8e5b8296a785-merged.mount: Deactivated successfully.
Dec  5 02:01:31 compute-0 podman[430675]: 2025-12-05 02:01:31.340878185 +0000 UTC m=+1.717280004 container remove b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_curie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:01:31 compute-0 systemd[1]: libpod-conmon-b4587edc64f1bc1d00a90c0fd622c5e5e7e51091813f10942a1d0b87f2f1fb03.scope: Deactivated successfully.
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: ERROR   02:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:01:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:01:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:01:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587700047' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.607 349552 DEBUG oslo_concurrency.processutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.622 349552 DEBUG nova.compute.provider_tree [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.637 349552 DEBUG nova.scheduler.client.report [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.655 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.658 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.658 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.659 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.659 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.719 349552 INFO nova.scheduler.client.report [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.780 349552 DEBUG oslo_concurrency.lockutils [None req-5fa94621-3d01-4e06-860c-38d715adb1ab ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:31 compute-0 nova_compute[349548]: 2025-12-05 02:01:31.789 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 192 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 26 op/s
Dec  5 02:01:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:01:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145945582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.221 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.338 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.351 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.351 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.352 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.410371842 +0000 UTC m=+0.092648757 container create d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.364578329 +0000 UTC m=+0.046855344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:32 compute-0 systemd[1]: Started libpod-conmon-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope.
Dec  5 02:01:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.514195922 +0000 UTC m=+0.196472867 container init d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.52624318 +0000 UTC m=+0.208520095 container start d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.529992925 +0000 UTC m=+0.212269870 container attach d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:01:32 compute-0 romantic_mclean[430929]: 167 167
Dec  5 02:01:32 compute-0 systemd[1]: libpod-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope: Deactivated successfully.
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.536709754 +0000 UTC m=+0.218986719 container died d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:01:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfcb252b28be680e4621169c79931aacf230d38e6ab46cb979a8c967226b01b6-merged.mount: Deactivated successfully.
Dec  5 02:01:32 compute-0 podman[430914]: 2025-12-05 02:01:32.590856301 +0000 UTC m=+0.273133226 container remove d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_mclean, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:01:32 compute-0 systemd[1]: libpod-conmon-d8b7899499b78b6f49f27d06637ea6b4e10b29f11241458f09e461606989c292.scope: Deactivated successfully.
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.796 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.798 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3614MB free_disk=59.88886642456055GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.798 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.801 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:32 compute-0 podman[430952]: 2025-12-05 02:01:32.84376201 +0000 UTC m=+0.060194418 container create 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:01:32 compute-0 systemd[1]: Started libpod-conmon-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope.
Dec  5 02:01:32 compute-0 podman[430952]: 2025-12-05 02:01:32.822048631 +0000 UTC m=+0.038481079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.937 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.938 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 02:01:32 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:32 compute-0 nova_compute[349548]: 2025-12-05 02:01:32.980 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.009692311 +0000 UTC m=+0.226124769 container init 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.042342996 +0000 UTC m=+0.258775414 container start 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.048136509 +0000 UTC m=+0.264568927 container attach 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:01:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:01:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677156586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.493 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.504 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.521 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.524 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 02:01:33 compute-0 nova_compute[349548]: 2025-12-05 02:01:33.525 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:01:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1565: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:01:33 compute-0 exciting_curie[430967]: {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    "0": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "devices": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "/dev/loop3"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            ],
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_name": "ceph_lv0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_size": "21470642176",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "name": "ceph_lv0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "tags": {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_name": "ceph",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.crush_device_class": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.encrypted": "0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_id": "0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.vdo": "0"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            },
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "vg_name": "ceph_vg0"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        }
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    ],
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    "1": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "devices": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "/dev/loop4"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            ],
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_name": "ceph_lv1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_size": "21470642176",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "name": "ceph_lv1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "tags": {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_name": "ceph",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.crush_device_class": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.encrypted": "0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_id": "1",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.vdo": "0"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            },
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "vg_name": "ceph_vg1"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        }
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    ],
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    "2": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "devices": [
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "/dev/loop5"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            ],
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_name": "ceph_lv2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_size": "21470642176",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "name": "ceph_lv2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "tags": {
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.cluster_name": "ceph",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.crush_device_class": "",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.encrypted": "0",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osd_id": "2",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:                "ceph.vdo": "0"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            },
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "type": "block",
Dec  5 02:01:33 compute-0 exciting_curie[430967]:            "vg_name": "ceph_vg2"
Dec  5 02:01:33 compute-0 exciting_curie[430967]:        }
Dec  5 02:01:33 compute-0 exciting_curie[430967]:    ]
Dec  5 02:01:33 compute-0 exciting_curie[430967]: }
Dec  5 02:01:33 compute-0 systemd[1]: libpod-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope: Deactivated successfully.
Dec  5 02:01:33 compute-0 podman[430952]: 2025-12-05 02:01:33.918479494 +0000 UTC m=+1.134911942 container died 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6926310a7b527b0a226540de2a6985d0eb2181f90eeab80330374d5b37eafc40-merged.mount: Deactivated successfully.
Dec  5 02:01:34 compute-0 podman[430952]: 2025-12-05 02:01:34.025669588 +0000 UTC m=+1.242101996 container remove 2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:01:34 compute-0 systemd[1]: libpod-conmon-2a1e9d887ccbfc08bd45a870234bd746908ab28e5c2b5fa3b979e4c0dfab7e22.scope: Deactivated successfully.
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.048719083 +0000 UTC m=+0.099559942 container create 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:34.997290051 +0000 UTC m=+0.048130990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:35 compute-0 systemd[1]: Started libpod-conmon-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope.
Dec  5 02:01:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.177112702 +0000 UTC m=+0.227953571 container init 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.187997257 +0000 UTC m=+0.238838116 container start 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.193038968 +0000 UTC m=+0.243879927 container attach 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:01:35 compute-0 quirky_liskov[431164]: 167 167
Dec  5 02:01:35 compute-0 systemd[1]: libpod-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope: Deactivated successfully.
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.196341701 +0000 UTC m=+0.247182560 container died 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-79deee6cc8ad23f24571e2c15434866f01a95fd0d7af108225bb7d8b7342235c-merged.mount: Deactivated successfully.
Dec  5 02:01:35 compute-0 podman[431148]: 2025-12-05 02:01:35.264465 +0000 UTC m=+0.315305859 container remove 4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:01:35 compute-0 systemd[1]: libpod-conmon-4e24370165c002c3af2b079dd982b11e685c97de45ce48fadeb2cea4fb80edd5.scope: Deactivated successfully.
Dec  5 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.529936651 +0000 UTC m=+0.078110360 container create 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.498291744 +0000 UTC m=+0.046465503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:01:35 compute-0 systemd[1]: Started libpod-conmon-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope.
Dec  5 02:01:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.67402089 +0000 UTC m=+0.222194629 container init 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.696550081 +0000 UTC m=+0.244723800 container start 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:01:35 compute-0 podman[431187]: 2025-12-05 02:01:35.703431494 +0000 UTC m=+0.251605203 container attach 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:01:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1566: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:01:36 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:36.533 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:01:36 compute-0 nova_compute[349548]: 2025-12-05 02:01:36.792 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]: {
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_id": 0,
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "type": "bluestore"
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    },
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_id": 1,
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "type": "bluestore"
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    },
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_id": 2,
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:        "type": "bluestore"
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]:    }
Dec  5 02:01:36 compute-0 optimistic_cohen[431203]: }
Dec  5 02:01:36 compute-0 systemd[1]: libpod-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Deactivated successfully.
Dec  5 02:01:36 compute-0 podman[431187]: 2025-12-05 02:01:36.906148086 +0000 UTC m=+1.454321775 container died 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:01:36 compute-0 systemd[1]: libpod-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Consumed 1.210s CPU time.
Dec  5 02:01:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-77823af6dac19a6b181bb2701eda02a0deb3259fb1cf8147731634ad6b2614a9-merged.mount: Deactivated successfully.
Dec  5 02:01:37 compute-0 podman[431187]: 2025-12-05 02:01:37.110173424 +0000 UTC m=+1.658347153 container remove 1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cohen, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:01:37 compute-0 systemd[1]: libpod-conmon-1664a415665b04987e7a2563a18736c15c7a50659ec5bc27db116ec7d1a1586e.scope: Deactivated successfully.
Dec  5 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:01:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:01:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8d5a668e-eecf-4c2c-980b-51609e88500d does not exist
Dec  5 02:01:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c7a6f763-d5d2-4292-beb9-bd5debf483af does not exist
Dec  5 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.522 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.549 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:37 compute-0 nova_compute[349548]: 2025-12-05 02:01:37.550 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:01:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:01:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:38 compute-0 nova_compute[349548]: 2025-12-05 02:01:38.104 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:01:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1568: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:01:41 compute-0 nova_compute[349548]: 2025-12-05 02:01:41.793 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:01:42 compute-0 podman[431299]: 2025-12-05 02:01:42.724541291 +0000 UTC m=+0.118427021 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:01:42 compute-0 podman[431298]: 2025-12-05 02:01:42.743120621 +0000 UTC m=+0.134817839 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 02:01:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.078 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900088.0759847, 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.079 349552 INFO nova.compute.manager [-] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:43 compute-0 nova_compute[349548]: 2025-12-05 02:01:43.110 349552 DEBUG nova.compute.manager [None req-75552ae3-7a45-41d0-bbc9-7974bced881a - - - - - -] [instance: 7cc97c2c-ffaf-4dfd-bd8c-4ac5267c2fd5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:01:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 767 B/s wr, 13 op/s
Dec  5 02:01:44 compute-0 podman[431337]: 2025-12-05 02:01:44.741729202 +0000 UTC m=+0.143067601 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 02:01:44 compute-0 podman[431338]: 2025-12-05 02:01:44.761447015 +0000 UTC m=+0.153938446 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  5 02:01:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:01:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:01:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:01:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2067888663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:01:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:01:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:01:46 compute-0 nova_compute[349548]: 2025-12-05 02:01:46.797 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1572: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:47 compute-0 systemd-logind[792]: New session 62 of user zuul.
Dec  5 02:01:47 compute-0 systemd[1]: Started Session 62 of User zuul.
Dec  5 02:01:48 compute-0 podman[431378]: 2025-12-05 02:01:48.07921891 +0000 UTC m=+0.130156520 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, distribution-scope=public, name=ubi9, io.buildah.version=1.29.0, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:01:48 compute-0 nova_compute[349548]: 2025-12-05 02:01:48.109 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:49 compute-0 python3[431575]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 02:01:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:51 compute-0 nova_compute[349548]: 2025-12-05 02:01:51.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:53 compute-0 nova_compute[349548]: 2025-12-05 02:01:53.111 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1575: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 139 MiB data, 292 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:01:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:01:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:01:56 compute-0 podman[431616]: 2025-12-05 02:01:56.697375971 +0000 UTC m=+0.099533021 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  5 02:01:56 compute-0 podman[431617]: 2025-12-05 02:01:56.70413351 +0000 UTC m=+0.105864468 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:01:56 compute-0 podman[431619]: 2025-12-05 02:01:56.734937564 +0000 UTC m=+0.124891702 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 02:01:56 compute-0 podman[431618]: 2025-12-05 02:01:56.758683949 +0000 UTC m=+0.152413783 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Dec  5 02:01:56 compute-0 nova_compute[349548]: 2025-12-05 02:01:56.801 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1577: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 683 KiB/s wr, 6 op/s
Dec  5 02:01:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:01:58 compute-0 nova_compute[349548]: 2025-12-05 02:01:58.115 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:01:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec  5 02:01:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec  5 02:01:58 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec  5 02:01:59 compute-0 podman[158197]: time="2025-12-05T02:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:01:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:01:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  5 02:01:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: ERROR   02:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:02:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:02:01 compute-0 nova_compute[349548]: 2025-12-05 02:02:01.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 147 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 820 KiB/s wr, 7 op/s
Dec  5 02:02:02 compute-0 ovn_controller[89286]: 2025-12-05T02:02:02Z|00057|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  5 02:02:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:03 compute-0 nova_compute[349548]: 2025-12-05 02:02:03.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  5 02:02:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.417 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.418 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.436 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.518 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.518 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.528 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.529 349552 INFO nova.compute.claims [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.694 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:06 compute-0 nova_compute[349548]: 2025-12-05 02:02:06.805 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:02:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571145534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.151 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.162 349552 DEBUG nova.compute.provider_tree [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.180 349552 DEBUG nova.scheduler.client.report [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.216 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.217 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.277 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.295 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.331 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.432 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.434 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.435 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating image(s)#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.488 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.548 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.591 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.598 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.599 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1583: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 773 KiB/s wr, 10 op/s
Dec  5 02:02:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:07 compute-0 nova_compute[349548]: 2025-12-05 02:02:07.898 349552 DEBUG nova.virt.libvirt.imagebackend [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/2f1298d8-b7d4-43bf-b887-b91409888461/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/2f1298d8-b7d4-43bf-b887-b91409888461/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Dec  5 02:02:08 compute-0 nova_compute[349548]: 2025-12-05 02:02:08.122 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.039 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.143 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.144 349552 DEBUG nova.virt.images [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] 2f1298d8-b7d4-43bf-b887-b91409888461 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.145 349552 DEBUG nova.privsep.utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.146 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.328 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.part /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.331 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.387 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410.converted --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.388 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "c50dad93a0c0d8de9b59bb98a1c7fb911608b410" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.419 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.428 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.783 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 155 MiB data, 308 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 713 KiB/s wr, 9 op/s
Dec  5 02:02:09 compute-0 nova_compute[349548]: 2025-12-05 02:02:09.903 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] resizing rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.110 349552 DEBUG nova.objects.instance [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'migration_context' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.160 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.198 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.206 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.282 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.283 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.284 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.284 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.319 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.326 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.800 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.946 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.947 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Ensure instance console log exists: /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.947 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.948 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.948 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.949 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T02:01:53Z,direct_url=<?>,disk_format='qcow2',id=2f1298d8-b7d4-43bf-b887-b91409888461,min_disk=0,min_ram=0,name='fvt_testing_image',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T02:01:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '2f1298d8-b7d4-43bf-b887-b91409888461'}], 'ephemerals': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'device_name': '/dev/vdb', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.958 349552 WARNING nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.968 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.969 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.974 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.975 349552 DEBUG nova.virt.libvirt.host [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.976 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.976 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:02:01Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d5a74f49-c758-455f-8cdb-cc9a5a969d77',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-05T02:01:53Z,direct_url=<?>,disk_format='qcow2',id=2f1298d8-b7d4-43bf-b887-b91409888461,min_disk=0,min_ram=0,name='fvt_testing_image',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-05T02:01:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.978 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.978 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.979 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.979 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.980 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.981 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.981 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.982 349552 DEBUG nova.virt.hardware [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:02:10 compute-0 nova_compute[349548]: 2025-12-05 02:02:10.985 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:02:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1132399297' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.496 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.498 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.808 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1585: 321 pgs: 321 active+clean; 164 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.0 MiB/s wr, 20 op/s
Dec  5 02:02:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:02:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3698465419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:02:11 compute-0 nova_compute[349548]: 2025-12-05 02:02:11.985 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.028 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.036 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:02:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2274756270' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.531 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.533 349552 DEBUG nova.objects.instance [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'pci_devices' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.549 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <uuid>ee0bd3a4-b224-4dad-948c-1362bf56fea1</uuid>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <name>instance-00000005</name>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <memory>524288</memory>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:name>fvt_testing_server</nova:name>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:02:10</nova:creationTime>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:flavor name="fvt_testing_flavor">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:memory>512</nova:memory>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:ephemeral>1</nova:ephemeral>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:user uuid="ff880837791d4f49a54672b8d0e705ff">admin</nova:user>
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <nova:project uuid="6ad982b73954486390215862ee62239f">admin</nova:project>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="2f1298d8-b7d4-43bf-b887-b91409888461"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <nova:ports/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <system>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="serial">ee0bd3a4-b224-4dad-948c-1362bf56fea1</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="uuid">ee0bd3a4-b224-4dad-948c-1362bf56fea1</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </system>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <os>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </os>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <features>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </features>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </source>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.eph0">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </source>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <target dev="vdb" bus="virtio"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </source>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:02:12 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/console.log" append="off"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <video>
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </video>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:02:12 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:02:12 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:02:12 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:02:12 compute-0 nova_compute[349548]: </domain>
Dec  5 02:02:12 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.624 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.625 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.626 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.627 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Using config drive#033[00m
Dec  5 02:02:12 compute-0 nova_compute[349548]: 2025-12-05 02:02:12.664 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.005 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Creating config drive at /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.016 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnatkm2o0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.126 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.168 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnatkm2o0" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.220 349552 DEBUG nova.storage.rbd_utils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] rbd image ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.228 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.462 349552 DEBUG oslo_concurrency.processutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config ee0bd3a4-b224-4dad-948c-1362bf56fea1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:13 compute-0 nova_compute[349548]: 2025-12-05 02:02:13.463 349552 INFO nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deleting local config drive /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1/disk.config because it was imported into RBD.#033[00m
Dec  5 02:02:13 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 02:02:13 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 02:02:13 compute-0 podman[432172]: 2025-12-05 02:02:13.595960943 +0000 UTC m=+0.092663627 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:02:13 compute-0 podman[432171]: 2025-12-05 02:02:13.599271546 +0000 UTC m=+0.100410264 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:02:13 compute-0 systemd-machined[138700]: New machine qemu-5-instance-00000005.
Dec  5 02:02:13 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  5 02:02:13 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 02:02:13 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 02:02:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 186 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.0 MiB/s wr, 51 op/s
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.707 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900134.7063084, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.708 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.716 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.717 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.730 349552 INFO nova.virt.libvirt.driver [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance spawned successfully.#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.736 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.762 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.772 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.801 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.801 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.802 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.803 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.804 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.804 349552 DEBUG nova.virt.libvirt.driver [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.808 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.809 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900134.7153697, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.809 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Started (Lifecycle Event)#033[00m
Dec  5 02:02:14 compute-0 podman[432328]: 2025-12-05 02:02:14.918660538 +0000 UTC m=+0.126343022 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:02:14 compute-0 podman[432329]: 2025-12-05 02:02:14.944243795 +0000 UTC m=+0.128003409 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.952 349552 INFO nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 7.52 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.952 349552 DEBUG nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.969 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:02:14 compute-0 nova_compute[349548]: 2025-12-05 02:02:14.982 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.023 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.058 349552 INFO nova.compute.manager [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 8.57 seconds to build instance.#033[00m
Dec  5 02:02:15 compute-0 nova_compute[349548]: 2025-12-05 02:02:15.082 349552 DEBUG oslo_concurrency.lockutils [None req-cdca4d55-2f17-4317-bccf-10a483552317 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.4 MiB/s wr, 51 op/s
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:02:16
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'images', '.mgr', 'default.rgw.control', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta']
Dec  5 02:02:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:02:16 compute-0 nova_compute[349548]: 2025-12-05 02:02:16.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:02:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec  5 02:02:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:18 compute-0 nova_compute[349548]: 2025-12-05 02:02:18.129 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:18 compute-0 podman[432364]: 2025-12-05 02:02:18.741653824 +0000 UTC m=+0.148974027 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, config_id=edpm, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public)
Dec  5 02:02:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 78 op/s
Dec  5 02:02:21 compute-0 nova_compute[349548]: 2025-12-05 02:02:21.814 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1590: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Dec  5 02:02:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:23 compute-0 nova_compute[349548]: 2025-12-05 02:02:23.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1007 KiB/s wr, 92 op/s
Dec  5 02:02:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 61 op/s
Dec  5 02:02:26 compute-0 nova_compute[349548]: 2025-12-05 02:02:26.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0013715879769811811 of space, bias 1.0, pg target 0.41147639309435435 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0005066271692062251 of space, bias 1.0, pg target 0.15198815076186756 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:02:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:02:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:02:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 7304 writes, 32K keys, 7304 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 7304 writes, 7304 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1331 writes, 6016 keys, 1331 commit groups, 1.0 writes per commit group, ingest: 8.58 MB, 0.01 MB/s#012Interval WAL: 1331 writes, 1331 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.3      0.38              0.17        19    0.020       0      0       0.0       0.0#012  L6      1/0    8.41 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.3    116.6     94.2      1.37              0.53        18    0.076     86K    10K       0.0       0.0#012 Sum      1/0    8.41 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.3     91.5     96.4      1.75              0.70        37    0.047     86K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4     66.8     69.2      0.57              0.15         8    0.071     22K   2515       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    116.6     94.2      1.37              0.53        18    0.076     86K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    105.6      0.37              0.17        18    0.021       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.038, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.06 MB/s write, 0.16 GB read, 0.05 MB/s read, 1.7 seconds#012Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 308.00 MB usage: 19.63 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000169 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1275,18.96 MB,6.1552%) FilterBlock(38,245.17 KB,0.0777356%) IndexBlock(38,444.05 KB,0.140792%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 02:02:27 compute-0 nova_compute[349548]: 2025-12-05 02:02:27.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:27 compute-0 nova_compute[349548]: 2025-12-05 02:02:27.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:02:27 compute-0 podman[432388]: 2025-12-05 02:02:27.706202233 +0000 UTC m=+0.107710870 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9)
Dec  5 02:02:27 compute-0 podman[432385]: 2025-12-05 02:02:27.710083732 +0000 UTC m=+0.126455325 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:02:27 compute-0 podman[432386]: 2025-12-05 02:02:27.71605914 +0000 UTC m=+0.125677764 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:02:27 compute-0 podman[432387]: 2025-12-05 02:02:27.74818797 +0000 UTC m=+0.141911418 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  5 02:02:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 255 B/s wr, 52 op/s
Dec  5 02:02:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.854280) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147854308, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 852, "num_deletes": 256, "total_data_size": 1097730, "memory_usage": 1124608, "flush_reason": "Manual Compaction"}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147862054, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1087339, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32165, "largest_seqno": 33016, "table_properties": {"data_size": 1083024, "index_size": 1967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9440, "raw_average_key_size": 19, "raw_value_size": 1074294, "raw_average_value_size": 2179, "num_data_blocks": 88, "num_entries": 493, "num_filter_entries": 493, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900076, "oldest_key_time": 1764900076, "file_creation_time": 1764900147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 7837 microseconds, and 3483 cpu microseconds.
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.862117) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1087339 bytes OK
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.862131) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863869) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863923) EVENT_LOG_v1 {"time_micros": 1764900147863876, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.863939) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1093514, prev total WAL file size 1093514, number of live WAL files 2.
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.864803) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303034' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1061KB)], [71(8614KB)]
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147864953, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9908207, "oldest_snapshot_seqno": -1}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5303 keys, 9805262 bytes, temperature: kUnknown
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147991377, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9805262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9767668, "index_size": 23212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 133855, "raw_average_key_size": 25, "raw_value_size": 9669658, "raw_average_value_size": 1823, "num_data_blocks": 958, "num_entries": 5303, "num_filter_entries": 5303, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900147, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.991687) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9805262 bytes
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.994509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.3 rd, 77.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 8.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(18.1) write-amplify(9.0) OK, records in: 5831, records dropped: 528 output_compression: NoCompression
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.994631) EVENT_LOG_v1 {"time_micros": 1764900147994617, "job": 40, "event": "compaction_finished", "compaction_time_micros": 126514, "compaction_time_cpu_micros": 46556, "output_level": 6, "num_output_files": 1, "total_output_size": 9805262, "num_input_records": 5831, "num_output_records": 5303, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:02:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900147995235, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900148009742, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:27.864508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:02:28.009954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:02:28 compute-0 nova_compute[349548]: 2025-12-05 02:02:28.135 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:02:29 compute-0 nova_compute[349548]: 2025-12-05 02:02:29.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:02:29 compute-0 podman[158197]: time="2025-12-05T02:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:02:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:02:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8642 "" "Go-http-client/1.1"
Dec  5 02:02:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1594: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.074 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.288 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.288 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.289 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.289 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.290 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.292 349552 INFO nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Terminating instance#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.295 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.296 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquired lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:02:30 compute-0 nova_compute[349548]: 2025-12-05 02:02:30.297 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.171 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: ERROR   02:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:02:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.736 349552 DEBUG nova.network.neutron [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.760 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Releasing lock "refresh_cache-ee0bd3a4-b224-4dad-948c-1362bf56fea1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.762 349552 DEBUG nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:02:31 compute-0 nova_compute[349548]: 2025-12-05 02:02:31.820 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 809 KiB/s rd, 25 op/s
Dec  5 02:02:31 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  5 02:02:31 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 18.707s CPU time.
Dec  5 02:02:31 compute-0 systemd-machined[138700]: Machine qemu-5-instance-00000005 terminated.
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.000 349552 INFO nova.virt.libvirt.driver [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance destroyed successfully.#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.001 349552 DEBUG nova.objects.instance [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid ee0bd3a4-b224-4dad-948c-1362bf56fea1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.174 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.192 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.193 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:32 compute-0 nova_compute[349548]: 2025-12-05 02:02:32.194 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.138 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.307 349552 INFO nova.virt.libvirt.driver [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deleting instance files /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1_del#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.308 349552 INFO nova.virt.libvirt.driver [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deletion of /var/lib/nova/instances/ee0bd3a4-b224-4dad-948c-1362bf56fea1_del complete#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.532 349552 INFO nova.compute.manager [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 1.77 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG oslo.service.loopingcall [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.533 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:02:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:02:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/862283307' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.603 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.702 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.712 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.712 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 nova_compute[349548]: 2025-12-05 02:02:33.713 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:02:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 188 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 12 op/s
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.047 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.064 349552 DEBUG nova.network.neutron [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.076 349552 INFO nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Took 0.54 seconds to deallocate network for instance.#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.121 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.122 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.176 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.178 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.906002044677734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.178 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.272 349552 DEBUG oslo_concurrency.processutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:02:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002993508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.832 349552 DEBUG oslo_concurrency.processutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.842 349552 DEBUG nova.compute.provider_tree [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.869 349552 DEBUG nova.scheduler.client.report [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.910 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.914 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.951 349552 INFO nova.scheduler.client.report [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance ee0bd3a4-b224-4dad-948c-1362bf56fea1#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.997 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.997 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.998 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:02:34 compute-0 nova_compute[349548]: 2025-12-05 02:02:34.998 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.010 349552 DEBUG oslo_concurrency.lockutils [None req-842c3ad3-6d29-4149-8fde-7d95581fcdac ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "ee0bd3a4-b224-4dad-948c-1362bf56fea1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.048 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:02:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2501496586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.584 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.596 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.629 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.689 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:02:35 compute-0 nova_compute[349548]: 2025-12-05 02:02:35.689 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1597: 321 pgs: 321 active+clean; 177 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 17 op/s
Dec  5 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec  5 02:02:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec  5 02:02:35 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec  5 02:02:36 compute-0 nova_compute[349548]: 2025-12-05 02:02:36.824 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:37 compute-0 nova_compute[349548]: 2025-12-05 02:02:37.689 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Dec  5 02:02:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:38 compute-0 nova_compute[349548]: 2025-12-05 02:02:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:02:38 compute-0 nova_compute[349548]: 2025-12-05 02:02:38.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.320 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.321 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.332 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.336 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:02:38.338282) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:02:38.342081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.371 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.372 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.372 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.398 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.399 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.400 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.402 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.404 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.406 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:02:38.402744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:02:38.406547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.477 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.477 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.478 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.522 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.523 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:02:38.524633) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.525 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.526 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:02:38.527582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.528 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.529 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:02:38.529589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.530 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:02:38.531820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.532 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:02:38.534382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.553 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.571 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.572 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.573 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:02:38.572495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.576 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.577 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:02:38.576161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:02:38.578846) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.588 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.589 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:02:38.588319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.590 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:02:38.589487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:02:38.592105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:02:38.593111) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:02:38.594165) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.595 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:02:38.595566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.596 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.597 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:02:38.597538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:02:38.598820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:02:38.600180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 48060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:02:38.601365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.601 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 41650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.602 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:02:38.602412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:02:38.603583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.604 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.605 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.606 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.607 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:02:38.608 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e108be2e-d007-486b-9489-24368b5bceb2 does not exist
Dec  5 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6637557b-3adf-4b5f-9f11-540fd6899dd8 does not exist
Dec  5 02:02:38 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6b13d99-e44f-45e4-9b62-a361123083b5 does not exist
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:02:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:02:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:39 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.743864951 +0000 UTC m=+0.071658329 container create 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:02:39 compute-0 systemd[1]: Started libpod-conmon-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope.
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.720405934 +0000 UTC m=+0.048199352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1600: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.2 KiB/s wr, 70 op/s
Dec  5 02:02:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.871662933 +0000 UTC m=+0.199456331 container init 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.888746662 +0000 UTC m=+0.216540080 container start 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:02:39 compute-0 friendly_dijkstra[432846]: 167 167
Dec  5 02:02:39 compute-0 systemd[1]: libpod-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope: Deactivated successfully.
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.89617314 +0000 UTC m=+0.223966558 container attach 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.896928871 +0000 UTC m=+0.224722249 container died 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-864fea35063f1754387ab12ac1e8cb2b3115f7507bbf948bdbd9033a32811fc7-merged.mount: Deactivated successfully.
Dec  5 02:02:39 compute-0 podman[432830]: 2025-12-05 02:02:39.952198931 +0000 UTC m=+0.279992319 container remove 14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:02:39 compute-0 systemd[1]: libpod-conmon-14180d9d17fbc02763032832574ff3356241834dee82bdcf9156c26cf681ca19.scope: Deactivated successfully.
Dec  5 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.167743142 +0000 UTC m=+0.062604326 container create 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.145797697 +0000 UTC m=+0.040658911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:40 compute-0 systemd[1]: Started libpod-conmon-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope.
Dec  5 02:02:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.309277359 +0000 UTC m=+0.204138613 container init 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.33749138 +0000 UTC m=+0.232352564 container start 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:02:40 compute-0 podman[432869]: 2025-12-05 02:02:40.342385837 +0000 UTC m=+0.237247151 container attach 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:02:41 compute-0 great_villani[432885]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:02:41 compute-0 great_villani[432885]: --> relative data size: 1.0
Dec  5 02:02:41 compute-0 great_villani[432885]: --> All data devices are unavailable
Dec  5 02:02:41 compute-0 systemd[1]: libpod-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Deactivated successfully.
Dec  5 02:02:41 compute-0 systemd[1]: libpod-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Consumed 1.191s CPU time.
Dec  5 02:02:41 compute-0 podman[432869]: 2025-12-05 02:02:41.606559822 +0000 UTC m=+1.501421036 container died 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce1f335efd85115650eda9a4540676689fb3708705f57398e9e49b011335f876-merged.mount: Deactivated successfully.
Dec  5 02:02:41 compute-0 podman[432869]: 2025-12-05 02:02:41.702697276 +0000 UTC m=+1.597558490 container remove 41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:02:41 compute-0 systemd[1]: libpod-conmon-41ff7f80e0ef97d91a89e54668fffede4439451cd49e4bfec3a82e1473f8b6c4.scope: Deactivated successfully.
Dec  5 02:02:41 compute-0 nova_compute[349548]: 2025-12-05 02:02:41.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 139 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.819014125 +0000 UTC m=+0.070700383 container create 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec  5 02:02:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec  5 02:02:42 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.791304808 +0000 UTC m=+0.042991096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:42 compute-0 systemd[1]: Started libpod-conmon-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope.
Dec  5 02:02:42 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.971424277 +0000 UTC m=+0.223110605 container init 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.983265809 +0000 UTC m=+0.234952067 container start 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.988245189 +0000 UTC m=+0.239931447 container attach 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:02:42 compute-0 peaceful_elion[433082]: 167 167
Dec  5 02:02:42 compute-0 systemd[1]: libpod-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope: Deactivated successfully.
Dec  5 02:02:42 compute-0 podman[433067]: 2025-12-05 02:02:42.999672989 +0000 UTC m=+0.251359247 container died 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3a5d4a0f65caf60d558c24b03fa7d239a93108284567850b63d62aaebb817ec-merged.mount: Deactivated successfully.
Dec  5 02:02:43 compute-0 podman[433067]: 2025-12-05 02:02:43.069111205 +0000 UTC m=+0.320797483 container remove 9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_elion, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:02:43 compute-0 systemd[1]: libpod-conmon-9f0bc936dc5da5b9afaa828617ab1a856ef6c8698db686533c25b07327ecb648.scope: Deactivated successfully.
Dec  5 02:02:43 compute-0 nova_compute[349548]: 2025-12-05 02:02:43.146 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.290153941 +0000 UTC m=+0.061167726 container create b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:02:43 compute-0 systemd[1]: Started libpod-conmon-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope.
Dec  5 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.273176825 +0000 UTC m=+0.044190650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.489787567 +0000 UTC m=+0.260801462 container init b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.506370501 +0000 UTC m=+0.277384336 container start b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:02:43 compute-0 podman[433106]: 2025-12-05 02:02:43.513211453 +0000 UTC m=+0.284225338 container attach b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:02:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 4.4 KiB/s wr, 62 op/s
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]: {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    "0": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "devices": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "/dev/loop3"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            ],
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_name": "ceph_lv0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_size": "21470642176",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "name": "ceph_lv0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "tags": {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_name": "ceph",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.crush_device_class": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.encrypted": "0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_id": "0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.vdo": "0"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            },
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "vg_name": "ceph_vg0"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        }
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    ],
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    "1": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "devices": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "/dev/loop4"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            ],
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_name": "ceph_lv1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_size": "21470642176",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "name": "ceph_lv1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "tags": {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_name": "ceph",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.crush_device_class": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.encrypted": "0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_id": "1",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.vdo": "0"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            },
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "vg_name": "ceph_vg1"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        }
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    ],
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    "2": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "devices": [
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "/dev/loop5"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            ],
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_name": "ceph_lv2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_size": "21470642176",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "name": "ceph_lv2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "tags": {
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.cluster_name": "ceph",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.crush_device_class": "",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.encrypted": "0",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osd_id": "2",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:                "ceph.vdo": "0"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            },
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "type": "block",
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:            "vg_name": "ceph_vg2"
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:        }
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]:    ]
Dec  5 02:02:44 compute-0 sharp_maxwell[433121]: }
Dec  5 02:02:44 compute-0 systemd[1]: libpod-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope: Deactivated successfully.
Dec  5 02:02:44 compute-0 podman[433131]: 2025-12-05 02:02:44.420340439 +0000 UTC m=+0.042645306 container died b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb9eb7387dd481a89bba323634bca1c3bfddda81ed0454bb84b9bb663ca9d0f9-merged.mount: Deactivated successfully.
Dec  5 02:02:44 compute-0 podman[433130]: 2025-12-05 02:02:44.478617463 +0000 UTC m=+0.084516050 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Dec  5 02:02:44 compute-0 podman[433131]: 2025-12-05 02:02:44.499017515 +0000 UTC m=+0.121322302 container remove b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:02:44 compute-0 systemd[1]: libpod-conmon-b9376c24c9f290429f60e0f843b2dfe729cc77d78ded944b85de86879fe4d69d.scope: Deactivated successfully.
Dec  5 02:02:44 compute-0 podman[433137]: 2025-12-05 02:02:44.519772246 +0000 UTC m=+0.115506418 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:02:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:02:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:02:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:02:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/877224894' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.436131032 +0000 UTC m=+0.083661836 container create a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.400504933 +0000 UTC m=+0.048035807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:45 compute-0 systemd[1]: Started libpod-conmon-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope.
Dec  5 02:02:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.574396067 +0000 UTC m=+0.221926941 container init a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.59624507 +0000 UTC m=+0.243775884 container start a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.605360025 +0000 UTC m=+0.252890849 container attach a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:02:45 compute-0 elated_bell[433345]: 167 167
Dec  5 02:02:45 compute-0 podman[433334]: 2025-12-05 02:02:45.606023444 +0000 UTC m=+0.097817633 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:02:45 compute-0 systemd[1]: libpod-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope: Deactivated successfully.
Dec  5 02:02:45 compute-0 conmon[433345]: conmon a950c062aa91f17c2d02 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope/container/memory.events
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.61053587 +0000 UTC m=+0.258066664 container died a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:02:45 compute-0 podman[433331]: 2025-12-05 02:02:45.627701331 +0000 UTC m=+0.120508859 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec  5 02:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd6cf5406243618e0365e10099b806f0ffc030eb5c99f3e4f1cb226069f1a29-merged.mount: Deactivated successfully.
Dec  5 02:02:45 compute-0 podman[433317]: 2025-12-05 02:02:45.665726727 +0000 UTC m=+0.313257501 container remove a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_bell, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 02:02:45 compute-0 systemd[1]: libpod-conmon-a950c062aa91f17c2d028e6c84fc8510d7f14422d55361137ea5a747db85083b.scope: Deactivated successfully.
Dec  5 02:02:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Dec  5 02:02:45 compute-0 podman[433394]: 2025-12-05 02:02:45.936979079 +0000 UTC m=+0.086362262 container create 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:02:45 compute-0 podman[433394]: 2025-12-05 02:02:45.905054904 +0000 UTC m=+0.054438147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:02:46 compute-0 systemd[1]: Started libpod-conmon-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope.
Dec  5 02:02:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.096785378 +0000 UTC m=+0.246168601 container init 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.111665505 +0000 UTC m=+0.261048688 container start 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 02:02:46 compute-0 podman[433394]: 2025-12-05 02:02:46.122148289 +0000 UTC m=+0.271531472 container attach 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:02:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.829 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.997 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900151.9948099, ee0bd3a4-b224-4dad-948c-1362bf56fea1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:02:46 compute-0 nova_compute[349548]: 2025-12-05 02:02:46.998 349552 INFO nova.compute.manager [-] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:02:47 compute-0 nova_compute[349548]: 2025-12-05 02:02:47.025 349552 DEBUG nova.compute.manager [None req-d6d0c888-7d93-431b-9be0-c4d8510faa09 - - - - - -] [instance: ee0bd3a4-b224-4dad-948c-1362bf56fea1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]: {
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_id": 0,
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "type": "bluestore"
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    },
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_id": 1,
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "type": "bluestore"
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    },
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_id": 2,
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:        "type": "bluestore"
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]:    }
Dec  5 02:02:47 compute-0 vigorous_murdock[433410]: }
Dec  5 02:02:47 compute-0 systemd[1]: libpod-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Deactivated successfully.
Dec  5 02:02:47 compute-0 systemd[1]: libpod-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Consumed 1.236s CPU time.
Dec  5 02:02:47 compute-0 podman[433443]: 2025-12-05 02:02:47.443144446 +0000 UTC m=+0.064581371 container died 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 02:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b20eeecc94ef70281a5ea7a146ae59af0972dbcdd9071d8c8103906285ccef-merged.mount: Deactivated successfully.
Dec  5 02:02:47 compute-0 podman[433443]: 2025-12-05 02:02:47.533511819 +0000 UTC m=+0.154948704 container remove 9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:02:47 compute-0 systemd[1]: libpod-conmon-9305b3f3975f4907787e81d32b9bad65141c865a3d446adc49990f1749d1aacc.scope: Deactivated successfully.
Dec  5 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:02:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:02:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bb1461a4-9bb6-42e7-bb6f-21ef232d4c34 does not exist
Dec  5 02:02:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 844d226c-bd4b-4d3c-be63-f703d3e81c2a does not exist
Dec  5 02:02:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec  5 02:02:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:48 compute-0 nova_compute[349548]: 2025-12-05 02:02:48.152 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:48 compute-0 systemd[1]: session-62.scope: Deactivated successfully.
Dec  5 02:02:48 compute-0 systemd[1]: session-62.scope: Consumed 1.330s CPU time.
Dec  5 02:02:48 compute-0 systemd-logind[792]: Session 62 logged out. Waiting for processes to exit.
Dec  5 02:02:48 compute-0 systemd-logind[792]: Removed session 62.
Dec  5 02:02:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:02:49 compute-0 podman[433510]: 2025-12-05 02:02:49.733208745 +0000 UTC m=+0.140736546 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 02:02:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 613 B/s rd, 306 B/s wr, 0 op/s
Dec  5 02:02:51 compute-0 nova_compute[349548]: 2025-12-05 02:02:51.833 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec  5 02:02:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec  5 02:02:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec  5 02:02:53 compute-0 nova_compute[349548]: 2025-12-05 02:02:53.154 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec  5 02:02:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 147 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 819 KiB/s wr, 7 op/s
Dec  5 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.195 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:02:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:02:56.196 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:02:56 compute-0 nova_compute[349548]: 2025-12-05 02:02:56.835 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  5 02:02:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:02:58 compute-0 nova_compute[349548]: 2025-12-05 02:02:58.156 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:02:58 compute-0 podman[433530]: 2025-12-05 02:02:58.70220351 +0000 UTC m=+0.099061658 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:02:58 compute-0 podman[433529]: 2025-12-05 02:02:58.713542618 +0000 UTC m=+0.115767656 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  5 02:02:58 compute-0 podman[433532]: 2025-12-05 02:02:58.726160321 +0000 UTC m=+0.107396071 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec  5 02:02:58 compute-0 podman[433531]: 2025-12-05 02:02:58.767119749 +0000 UTC m=+0.161528578 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  5 02:02:59 compute-0 podman[158197]: time="2025-12-05T02:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:02:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:02:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec  5 02:02:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.6 MiB/s wr, 18 op/s
Dec  5 02:03:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec  5 02:03:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec  5 02:03:01 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec  5 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:03:01 compute-0 openstack_network_exporter[366555]: ERROR   02:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:03:01 compute-0 nova_compute[349548]: 2025-12-05 02:03:01.837 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 155 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 869 KiB/s wr, 11 op/s
Dec  5 02:03:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:03 compute-0 nova_compute[349548]: 2025-12-05 02:03:03.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 774 KiB/s wr, 34 op/s
Dec  5 02:03:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 139 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 775 KiB/s wr, 35 op/s
Dec  5 02:03:06 compute-0 nova_compute[349548]: 2025-12-05 02:03:06.840 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec  5 02:03:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec  5 02:03:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec  5 02:03:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec  5 02:03:08 compute-0 nova_compute[349548]: 2025-12-05 02:03:08.162 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Dec  5 02:03:10 compute-0 systemd-logind[792]: New session 63 of user zuul.
Dec  5 02:03:10 compute-0 systemd[1]: Started Session 63 of User zuul.
Dec  5 02:03:11 compute-0 nova_compute[349548]: 2025-12-05 02:03:11.843 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1620: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Dec  5 02:03:12 compute-0 python3[433793]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 02:03:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:13 compute-0 nova_compute[349548]: 2025-12-05 02:03:13.164 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 0 op/s
Dec  5 02:03:14 compute-0 podman[433833]: 2025-12-05 02:03:14.763484328 +0000 UTC m=+0.158051451 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:03:14 compute-0 podman[433834]: 2025-12-05 02:03:14.79103737 +0000 UTC m=+0.181623452 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:03:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1622: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:03:16
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups']
Dec  5 02:03:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:03:16 compute-0 podman[433875]: 2025-12-05 02:03:16.712872348 +0000 UTC m=+0.117213146 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 02:03:16 compute-0 podman[433874]: 2025-12-05 02:03:16.731727857 +0000 UTC m=+0.142312650 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:03:16 compute-0 nova_compute[349548]: 2025-12-05 02:03:16.846 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:03:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1623: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:18 compute-0 nova_compute[349548]: 2025-12-05 02:03:18.168 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:20 compute-0 python3[434086]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 02:03:20 compute-0 podman[434085]: 2025-12-05 02:03:20.065762277 +0000 UTC m=+0.141901108 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0)
Dec  5 02:03:21 compute-0 nova_compute[349548]: 2025-12-05 02:03:21.849 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:23 compute-0 nova_compute[349548]: 2025-12-05 02:03:23.171 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1627: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:26 compute-0 nova_compute[349548]: 2025-12-05 02:03:26.854 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:03:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:03:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:03:28 compute-0 nova_compute[349548]: 2025-12-05 02:03:28.174 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:29 compute-0 podman[434268]: 2025-12-05 02:03:29.701202573 +0000 UTC m=+0.112475864 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:03:29 compute-0 podman[434269]: 2025-12-05 02:03:29.713690623 +0000 UTC m=+0.115421007 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:03:29 compute-0 podman[434270]: 2025-12-05 02:03:29.715696379 +0000 UTC m=+0.114186602 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 02:03:29 compute-0 podman[434271]: 2025-12-05 02:03:29.734362982 +0000 UTC m=+0.133951786 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.buildah.version=1.33.7, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter)
Dec  5 02:03:29 compute-0 podman[158197]: time="2025-12-05T02:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:03:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:03:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec  5 02:03:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:29 compute-0 python3[434402]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:30 compute-0 nova_compute[349548]: 2025-12-05 02:03:30.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.383 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.383 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: ERROR   02:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:03:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:03:31 compute-0 nova_compute[349548]: 2025-12-05 02:03:31.856 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1630: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.856 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.881 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:03:32 compute-0 nova_compute[349548]: 2025-12-05 02:03:32.882 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:03:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.102 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.103 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.179 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:03:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619876228' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.596 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.757 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.759 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.760 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.770 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.771 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 nova_compute[349548]: 2025-12-05 02:03:33.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:03:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.353 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.354 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.355 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.355 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.435 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.435 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.436 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.436 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.453 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.471 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.472 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.493 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.542 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  5 02:03:34 compute-0 nova_compute[349548]: 2025-12-05 02:03:34.611 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:03:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:03:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000060851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.107 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.123 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.361 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 02:03:35 compute-0 nova_compute[349548]: 2025-12-05 02:03:35.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:03:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1632: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.364 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.365 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:03:36 compute-0 nova_compute[349548]: 2025-12-05 02:03:36.859 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:37 compute-0 nova_compute[349548]: 2025-12-05 02:03:37.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:03:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1633: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:38 compute-0 nova_compute[349548]: 2025-12-05 02:03:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:03:38 compute-0 nova_compute[349548]: 2025-12-05 02:03:38.186 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:41 compute-0 nova_compute[349548]: 2025-12-05 02:03:41.863 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:43 compute-0 nova_compute[349548]: 2025-12-05 02:03:43.189 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1636: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:03:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:03:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:03:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3725910511' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:03:45 compute-0 podman[434633]: 2025-12-05 02:03:45.61011592 +0000 UTC m=+0.109326726 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:03:45 compute-0 podman[434632]: 2025-12-05 02:03:45.634950946 +0000 UTC m=+0.146367094 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 02:03:45 compute-0 python3[434700]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  5 02:03:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:03:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:03:46 compute-0 nova_compute[349548]: 2025-12-05 02:03:46.866 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:47 compute-0 podman[434739]: 2025-12-05 02:03:47.726191402 +0000 UTC m=+0.114223942 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, 
io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  5 02:03:47 compute-0 podman[434738]: 2025-12-05 02:03:47.72718475 +0000 UTC m=+0.124886371 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 02:03:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:48 compute-0 nova_compute[349548]: 2025-12-05 02:03:48.192 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3c6d8abe-70c2-42d4-a107-e393386b4789 does not exist
Dec  5 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b20c11bd-4c41-46d5-8d5d-88d98907dfa6 does not exist
Dec  5 02:03:49 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev aa147508-a70b-4859-80ed-9d489b05e01b does not exist
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:03:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:49 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:03:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1639: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.288585514 +0000 UTC m=+0.095385655 container create 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.244288342 +0000 UTC m=+0.051088503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:50 compute-0 systemd[1]: Started libpod-conmon-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope.
Dec  5 02:03:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.416066467 +0000 UTC m=+0.222866668 container init 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.430116831 +0000 UTC m=+0.236916942 container start 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.43541545 +0000 UTC m=+0.242215591 container attach 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:03:50 compute-0 nifty_bassi[435061]: 167 167
Dec  5 02:03:50 compute-0 systemd[1]: libpod-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope: Deactivated successfully.
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.441025337 +0000 UTC m=+0.247825478 container died 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b54d0539e13e3171f31aab36886e24d71df4a8c1f8be6f57f492211058eb111-merged.mount: Deactivated successfully.
Dec  5 02:03:50 compute-0 podman[435059]: 2025-12-05 02:03:50.509065044 +0000 UTC m=+0.152965479 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, release=1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:03:50 compute-0 podman[435045]: 2025-12-05 02:03:50.515230667 +0000 UTC m=+0.322030778 container remove 04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bassi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:03:50 compute-0 systemd[1]: libpod-conmon-04ce5437b2b29bdf2deacce60aa6d39f0443fb336dce83f3569646841cb597f8.scope: Deactivated successfully.
Dec  5 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.800599616 +0000 UTC m=+0.087462673 container create 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.766992524 +0000 UTC m=+0.053855661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:50 compute-0 systemd[1]: Started libpod-conmon-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope.
Dec  5 02:03:50 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.938976984 +0000 UTC m=+0.225840101 container init 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.973197203 +0000 UTC m=+0.260060280 container start 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:50 compute-0 podman[435104]: 2025-12-05 02:03:50.981060134 +0000 UTC m=+0.267923261 container attach 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:03:51 compute-0 nova_compute[349548]: 2025-12-05 02:03:51.870 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:52 compute-0 unruffled_keldysh[435120]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:03:52 compute-0 unruffled_keldysh[435120]: --> relative data size: 1.0
Dec  5 02:03:52 compute-0 unruffled_keldysh[435120]: --> All data devices are unavailable
Dec  5 02:03:52 compute-0 systemd[1]: libpod-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Deactivated successfully.
Dec  5 02:03:52 compute-0 systemd[1]: libpod-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Consumed 1.132s CPU time.
Dec  5 02:03:52 compute-0 podman[435104]: 2025-12-05 02:03:52.182321384 +0000 UTC m=+1.469184491 container died 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b9d3729300b7383998995f9b875ceb97ad1075666fbda813c7c52e64cfd8e9-merged.mount: Deactivated successfully.
Dec  5 02:03:52 compute-0 podman[435104]: 2025-12-05 02:03:52.61636278 +0000 UTC m=+1.903225867 container remove 898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_keldysh, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:52 compute-0 systemd[1]: libpod-conmon-898b827d4a6024cfa782b3ebdaaf83cc257f4dc219241501a2f5c9cf7c3ff25a.scope: Deactivated successfully.
Dec  5 02:03:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.969053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232969103, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1009, "num_deletes": 251, "total_data_size": 1362392, "memory_usage": 1388152, "flush_reason": "Manual Compaction"}
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232979637, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 861818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33017, "largest_seqno": 34025, "table_properties": {"data_size": 857750, "index_size": 1656, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10744, "raw_average_key_size": 20, "raw_value_size": 848983, "raw_average_value_size": 1654, "num_data_blocks": 74, "num_entries": 513, "num_filter_entries": 513, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900147, "oldest_key_time": 1764900147, "file_creation_time": 1764900232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 10709 microseconds, and 4226 cpu microseconds.
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.979759) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 861818 bytes OK
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.979791) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983225) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983250) EVENT_LOG_v1 {"time_micros": 1764900232983242, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.983276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1357586, prev total WAL file size 1357586, number of live WAL files 2.
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.984775) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(841KB)], [74(9575KB)]
Dec  5 02:03:52 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900232984864, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 10667080, "oldest_snapshot_seqno": -1}
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 5336 keys, 7853840 bytes, temperature: kUnknown
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233055563, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 7853840, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7819600, "index_size": 19794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 134791, "raw_average_key_size": 25, "raw_value_size": 7724482, "raw_average_value_size": 1447, "num_data_blocks": 818, "num_entries": 5336, "num_filter_entries": 5336, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900232, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.055981) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 7853840 bytes
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.059195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.7 rd, 110.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.4 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(21.5) write-amplify(9.1) OK, records in: 5816, records dropped: 480 output_compression: NoCompression
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.059220) EVENT_LOG_v1 {"time_micros": 1764900233059208, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70798, "compaction_time_cpu_micros": 38914, "output_level": 6, "num_output_files": 1, "total_output_size": 7853840, "num_input_records": 5816, "num_output_records": 5336, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233059624, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900233062394, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:52.984535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062597) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.062600) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.063196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:03:53.063201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:03:53 compute-0 nova_compute[349548]: 2025-12-05 02:03:53.195 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.789774009 +0000 UTC m=+0.087106801 container create 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.752044203 +0000 UTC m=+0.049377045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:53 compute-0 systemd[1]: Started libpod-conmon-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope.
Dec  5 02:03:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1641: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.953616232 +0000 UTC m=+0.250949014 container init 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.971505083 +0000 UTC m=+0.268837885 container start 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.978556341 +0000 UTC m=+0.275889103 container attach 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 02:03:53 compute-0 practical_engelbart[435316]: 167 167
Dec  5 02:03:53 compute-0 systemd[1]: libpod-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope: Deactivated successfully.
Dec  5 02:03:53 compute-0 podman[435300]: 2025-12-05 02:03:53.985754512 +0000 UTC m=+0.283087314 container died 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  5 02:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0800ec38cb51fe045abc6144b085b243d26d797ef22e79d6deecd3d13a776926-merged.mount: Deactivated successfully.
Dec  5 02:03:54 compute-0 podman[435300]: 2025-12-05 02:03:54.063495351 +0000 UTC m=+0.360828123 container remove 454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_engelbart, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:54 compute-0 systemd[1]: libpod-conmon-454a11abde918f9f15b74f83055f6870b2243c157b95259a496433827f7b4860.scope: Deactivated successfully.
Dec  5 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.331567965 +0000 UTC m=+0.087039340 container create ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.298830248 +0000 UTC m=+0.054301673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:54 compute-0 systemd[1]: Started libpod-conmon-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope.
Dec  5 02:03:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.459291585 +0000 UTC m=+0.214763010 container init ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.484743289 +0000 UTC m=+0.240214644 container start ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:03:54 compute-0 podman[435339]: 2025-12-05 02:03:54.489151272 +0000 UTC m=+0.244622857 container attach ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]: {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    "0": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "devices": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "/dev/loop3"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            ],
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_name": "ceph_lv0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_size": "21470642176",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "name": "ceph_lv0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "tags": {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_name": "ceph",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.crush_device_class": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.encrypted": "0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_id": "0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.vdo": "0"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            },
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "vg_name": "ceph_vg0"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        }
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    ],
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    "1": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "devices": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "/dev/loop4"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            ],
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_name": "ceph_lv1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_size": "21470642176",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "name": "ceph_lv1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "tags": {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_name": "ceph",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.crush_device_class": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.encrypted": "0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_id": "1",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.vdo": "0"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            },
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "vg_name": "ceph_vg1"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        }
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    ],
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    "2": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "devices": [
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "/dev/loop5"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            ],
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_name": "ceph_lv2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_size": "21470642176",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "name": "ceph_lv2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "tags": {
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.cluster_name": "ceph",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.crush_device_class": "",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.encrypted": "0",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osd_id": "2",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:                "ceph.vdo": "0"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            },
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "type": "block",
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:            "vg_name": "ceph_vg2"
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:        }
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]:    ]
Dec  5 02:03:55 compute-0 compassionate_williamson[435355]: }
Dec  5 02:03:55 compute-0 systemd[1]: libpod-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope: Deactivated successfully.
Dec  5 02:03:55 compute-0 podman[435339]: 2025-12-05 02:03:55.317328706 +0000 UTC m=+1.072800091 container died ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-24aa83fcfef473f74903e94adb6f4b8af10073a6069b0b64e5f9544020aa3132-merged.mount: Deactivated successfully.
Dec  5 02:03:55 compute-0 podman[435339]: 2025-12-05 02:03:55.442135504 +0000 UTC m=+1.197606859 container remove ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:03:55 compute-0 systemd[1]: libpod-conmon-ddc510f1980060ca769dcc2ae9879d32b59bb5f1baea404877124e38a203aa98.scope: Deactivated successfully.
Dec  5 02:03:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1642: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.197 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.198 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:03:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:03:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.567618701 +0000 UTC m=+0.082901865 container create 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.532574879 +0000 UTC m=+0.047858153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:56 compute-0 systemd[1]: Started libpod-conmon-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope.
Dec  5 02:03:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.710062763 +0000 UTC m=+0.225345937 container init 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.726315699 +0000 UTC m=+0.241598893 container start 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.733388827 +0000 UTC m=+0.248672011 container attach 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 02:03:56 compute-0 jovial_maxwell[435526]: 167 167
Dec  5 02:03:56 compute-0 systemd[1]: libpod-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope: Deactivated successfully.
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.741414822 +0000 UTC m=+0.256698016 container died 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:03:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b47f1683717c53ebbd1ec33ea359394fd5cf61583369240b8adc3c6b3bb426d-merged.mount: Deactivated successfully.
Dec  5 02:03:56 compute-0 podman[435512]: 2025-12-05 02:03:56.813524263 +0000 UTC m=+0.328807447 container remove 8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 02:03:56 compute-0 systemd[1]: libpod-conmon-8614913910dd3c2820098e216475ae8ad00bc0c33602ae13a887b1847f2421f1.scope: Deactivated successfully.
Dec  5 02:03:56 compute-0 nova_compute[349548]: 2025-12-05 02:03:56.872 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.112896905 +0000 UTC m=+0.097441042 container create 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.077802481 +0000 UTC m=+0.062346678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:03:57 compute-0 systemd[1]: Started libpod-conmon-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope.
Dec  5 02:03:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.297569721 +0000 UTC m=+0.282113828 container init 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.313839317 +0000 UTC m=+0.298383444 container start 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:03:57 compute-0 podman[435552]: 2025-12-05 02:03:57.320493904 +0000 UTC m=+0.305038021 container attach 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:03:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:03:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:03:58 compute-0 nova_compute[349548]: 2025-12-05 02:03:58.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:03:58 compute-0 cool_mclean[435568]: {
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_id": 0,
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "type": "bluestore"
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    },
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_id": 1,
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "type": "bluestore"
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    },
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_id": 2,
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:03:58 compute-0 cool_mclean[435568]:        "type": "bluestore"
Dec  5 02:03:58 compute-0 cool_mclean[435568]:    }
Dec  5 02:03:58 compute-0 cool_mclean[435568]: }
Dec  5 02:03:58 compute-0 systemd[1]: libpod-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Deactivated successfully.
Dec  5 02:03:58 compute-0 systemd[1]: libpod-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Consumed 1.132s CPU time.
Dec  5 02:03:58 compute-0 podman[435552]: 2025-12-05 02:03:58.455031953 +0000 UTC m=+1.439576100 container died 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-c13e0b47c97999cf2380b0a2a5d808da7cf4fdafee0d8a80a4f4379d00ecc7f9-merged.mount: Deactivated successfully.
Dec  5 02:03:58 compute-0 podman[435552]: 2025-12-05 02:03:58.538623966 +0000 UTC m=+1.523168073 container remove 07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:03:58 compute-0 systemd[1]: libpod-conmon-07cdba860a79bb5264c03b1e09305cec1796500279462865372b627eb34d94fd.scope: Deactivated successfully.
Dec  5 02:03:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:03:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:03:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47d7979e-1056-42c7-9555-d34f85cd4943 does not exist
Dec  5 02:03:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4815d197-d3a4-4139-a64f-4a77ac3c0993 does not exist
Dec  5 02:03:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:03:59 compute-0 podman[158197]: time="2025-12-05T02:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:03:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:03:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8637 "" "Go-http-client/1.1"
Dec  5 02:03:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:00 compute-0 podman[435663]: 2025-12-05 02:04:00.711784889 +0000 UTC m=+0.110575610 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:04:00 compute-0 podman[435669]: 2025-12-05 02:04:00.736371908 +0000 UTC m=+0.129700896 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec  5 02:04:00 compute-0 podman[435662]: 2025-12-05 02:04:00.737184181 +0000 UTC m=+0.143414461 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:04:00 compute-0 podman[435664]: 2025-12-05 02:04:00.763985923 +0000 UTC m=+0.155499540 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: ERROR   02:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:04:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:04:01 compute-0 nova_compute[349548]: 2025-12-05 02:04:01.875 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1645: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:03 compute-0 nova_compute[349548]: 2025-12-05 02:04:03.202 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:06 compute-0 nova_compute[349548]: 2025-12-05 02:04:06.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:08 compute-0 nova_compute[349548]: 2025-12-05 02:04:08.204 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:11 compute-0 nova_compute[349548]: 2025-12-05 02:04:11.883 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:13 compute-0 nova_compute[349548]: 2025-12-05 02:04:13.208 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:13 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 02:04:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1651: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:04:16
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'backups', 'cephfs.cephfs.meta']
Dec  5 02:04:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:04:16 compute-0 podman[435748]: 2025-12-05 02:04:16.708660761 +0000 UTC m=+0.113424950 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  5 02:04:16 compute-0 podman[435749]: 2025-12-05 02:04:16.735180734 +0000 UTC m=+0.135015475 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:04:16 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 02:04:16 compute-0 nova_compute[349548]: 2025-12-05 02:04:16.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1653: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:04:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:04:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:18 compute-0 nova_compute[349548]: 2025-12-05 02:04:18.211 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:18 compute-0 podman[435790]: 2025-12-05 02:04:18.708157807 +0000 UTC m=+0.098573994 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:04:18 compute-0 podman[435789]: 2025-12-05 02:04:18.713654311 +0000 UTC m=+0.122322420 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:04:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1654: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:20 compute-0 podman[435828]: 2025-12-05 02:04:20.694563155 +0000 UTC m=+0.090974881 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 02:04:21 compute-0 nova_compute[349548]: 2025-12-05 02:04:21.890 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:04:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 7369 writes, 28K keys, 7369 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7369 writes, 1658 syncs, 4.44 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 796 writes, 1996 keys, 796 commit groups, 1.0 writes per commit group, ingest: 1.23 MB, 0.00 MB/s#012Interval WAL: 796 writes, 362 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:04:23 compute-0 nova_compute[349548]: 2025-12-05 02:04:23.214 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1656: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:26 compute-0 nova_compute[349548]: 2025-12-05 02:04:26.893 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:04:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:04:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1658: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:28 compute-0 nova_compute[349548]: 2025-12-05 02:04:28.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:04:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 8925 writes, 35K keys, 8925 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8925 writes, 2023 syncs, 4.41 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 851 writes, 2760 keys, 851 commit groups, 1.0 writes per commit group, ingest: 1.82 MB, 0.00 MB/s#012Interval WAL: 851 writes, 368 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:04:29 compute-0 podman[158197]: time="2025-12-05T02:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:04:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:04:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec  5 02:04:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:30 compute-0 nova_compute[349548]: 2025-12-05 02:04:30.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:30 compute-0 nova_compute[349548]: 2025-12-05 02:04:30.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:04:31 compute-0 nova_compute[349548]: 2025-12-05 02:04:31.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:04:31 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:04:31 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: ERROR   02:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:04:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:04:31 compute-0 podman[435849]: 2025-12-05 02:04:31.734066361 +0000 UTC m=+0.140455187 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:04:31 compute-0 podman[435859]: 2025-12-05 02:04:31.739455782 +0000 UTC m=+0.132893745 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public)
Dec  5 02:04:31 compute-0 podman[435848]: 2025-12-05 02:04:31.746046287 +0000 UTC m=+0.159062559 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  5 02:04:31 compute-0 podman[435850]: 2025-12-05 02:04:31.772503169 +0000 UTC m=+0.161847508 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  5 02:04:31 compute-0 nova_compute[349548]: 2025-12-05 02:04:31.896 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:32 compute-0 nova_compute[349548]: 2025-12-05 02:04:32.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:32 compute-0 nova_compute[349548]: 2025-12-05 02:04:32.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.128 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.129 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.130 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.223 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:04:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3667686660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.612 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.725 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.735 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.735 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 nova_compute[349548]: 2025-12-05 02:04:33.737 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:04:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1661: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.318 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3622MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.409 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.410 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.411 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:04:34 compute-0 nova_compute[349548]: 2025-12-05 02:04:34.480 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:04:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:04:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921157223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.028 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.042 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.071 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.074 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:04:35 compute-0 nova_compute[349548]: 2025-12-05 02:04:35.074 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:04:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:04:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 7411 writes, 29K keys, 7411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7411 writes, 1632 syncs, 4.54 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 761 writes, 2337 keys, 761 commit groups, 1.0 writes per commit group, ingest: 1.65 MB, 0.00 MB/s#012Interval WAL: 761 writes, 334 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:04:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:36 compute-0 nova_compute[349548]: 2025-12-05 02:04:36.900 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:04:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 02:04:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.048 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.049 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:04:38 compute-0 nova_compute[349548]: 2025-12-05 02:04:38.227 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.338 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'name': 'test_0', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'name': 'vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq', 'flavor': {'id': '7d473820-6f66-40b4-b8d1-decd466d7dd2', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'aa58c1e9-bdcc-4e60-9cee-eaeee0741251'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6ad982b73954486390215862ee62239f', 'user_id': 'ff880837791d4f49a54672b8d0e705ff', 'hostId': 'c00078154b620f81ef3acab090afa15b914aca6c57286253be564282', 'status': 'active', 'metadata': {'metering.server_group': 'b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:04:38.343731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:04:38.346314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.373 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.374 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.374 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.406 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.407 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.408 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.409 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.410 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.411 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:04:38.410427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:04:38.413541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.504 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.506 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.507 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.579 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.579 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 2043636416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 325714825 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.latency volume: 190759187 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 1726190004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.581 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 302563806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.582 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.latency volume: 198504004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.583 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.584 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:04:38.581009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:04:38.583457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.586 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:04:38.586096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 41762816 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:04:38.588175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.588 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:04:38.590310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.615 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.637 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.638 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 7524740776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 28454640 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:04:38.638552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.639 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 8278686410 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.640 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 33331693 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.640 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.641 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.642 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.643 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.643 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:04:38.641699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.646 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.647 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:04:38.645745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.650 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.655 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.655 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.656 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.657 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:04:38.656715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.659 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:04:38.659310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.660 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.661 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.661 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/disk.device.allocation volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.663 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:04:38.663324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.664 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.665 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.666 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:04:38.665584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.666 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.667 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.668 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:04:38.668332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.669 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.670 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.671 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/memory.usage volume: 48.87890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.672 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/memory.usage volume: 49.01171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:04:38.671241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.674 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.675 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:04:38.674627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:04:38.676797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.678 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.679 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:04:38.678784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.680 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/cpu volume: 49980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/cpu volume: 43650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:04:38.680721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:04:38.682553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.compute.pollsters [-] b69a0e24-1bc4-46a5-92d7-367c1efd53df/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.684 14 DEBUG ceilometer.compute.pollsters [-] 3611d2ae-da33-4e55-aec7-0bec88d3b4e0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:04:38.684381) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.685 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.686 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:04:38.687 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:04:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1664: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:41 compute-0 nova_compute[349548]: 2025-12-05 02:04:41.903 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:43 compute-0 nova_compute[349548]: 2025-12-05 02:04:43.230 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1666: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:04:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:04:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:04:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3853207781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:04:45 compute-0 systemd[1]: session-63.scope: Deactivated successfully.
Dec  5 02:04:45 compute-0 systemd[1]: session-63.scope: Consumed 5.079s CPU time.
Dec  5 02:04:45 compute-0 systemd-logind[792]: Session 63 logged out. Waiting for processes to exit.
Dec  5 02:04:45 compute-0 systemd-logind[792]: Removed session 63.
Dec  5 02:04:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:04:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:04:46 compute-0 nova_compute[349548]: 2025-12-05 02:04:46.905 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:47 compute-0 podman[435979]: 2025-12-05 02:04:47.727649288 +0000 UTC m=+0.128413571 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:04:47 compute-0 podman[435980]: 2025-12-05 02:04:47.730393575 +0000 UTC m=+0.125518890 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:04:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1668: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:48 compute-0 nova_compute[349548]: 2025-12-05 02:04:48.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:49 compute-0 podman[436020]: 2025-12-05 02:04:49.685811004 +0000 UTC m=+0.101395733 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  5 02:04:49 compute-0 podman[436021]: 2025-12-05 02:04:49.699357944 +0000 UTC m=+0.096559698 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  5 02:04:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:51 compute-0 podman[436058]: 2025-12-05 02:04:51.711160152 +0000 UTC m=+0.115207890 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-type=git, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 02:04:51 compute-0 nova_compute[349548]: 2025-12-05 02:04:51.908 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:53 compute-0 nova_compute[349548]: 2025-12-05 02:04:53.238 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.308014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293308073, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 717, "num_deletes": 251, "total_data_size": 899822, "memory_usage": 912984, "flush_reason": "Manual Compaction"}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293319833, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 891479, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34026, "largest_seqno": 34742, "table_properties": {"data_size": 887751, "index_size": 1572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8318, "raw_average_key_size": 19, "raw_value_size": 880280, "raw_average_value_size": 2042, "num_data_blocks": 70, "num_entries": 431, "num_filter_entries": 431, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900233, "oldest_key_time": 1764900233, "file_creation_time": 1764900293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 12216 microseconds, and 6073 cpu microseconds.
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.320236) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 891479 bytes OK
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.320262) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323133) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323156) EVENT_LOG_v1 {"time_micros": 1764900293323149, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323179) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 896136, prev total WAL file size 896136, number of live WAL files 2.
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.324155) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(870KB)], [77(7669KB)]
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293324217, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 8745319, "oldest_snapshot_seqno": -1}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 5254 keys, 6998817 bytes, temperature: kUnknown
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293375607, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 6998817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6965915, "index_size": 18648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 133733, "raw_average_key_size": 25, "raw_value_size": 6872953, "raw_average_value_size": 1308, "num_data_blocks": 763, "num_entries": 5254, "num_filter_entries": 5254, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900293, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.376138) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 6998817 bytes
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.379111) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.7 rd, 135.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 7.5 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(17.7) write-amplify(7.9) OK, records in: 5767, records dropped: 513 output_compression: NoCompression
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.379150) EVENT_LOG_v1 {"time_micros": 1764900293379132, "job": 44, "event": "compaction_finished", "compaction_time_micros": 51536, "compaction_time_cpu_micros": 29906, "output_level": 6, "num_output_files": 1, "total_output_size": 6998817, "num_input_records": 5767, "num_output_records": 5254, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293379719, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900293383326, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.323983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:04:53.383590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:04:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1672: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.198 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.199 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:04:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:04:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:04:56 compute-0 nova_compute[349548]: 2025-12-05 02:04:56.912 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:04:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:04:58 compute-0 nova_compute[349548]: 2025-12-05 02:04:58.241 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:04:59 compute-0 podman[158197]: time="2025-12-05T02:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:04:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:04:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8647 "" "Go-http-client/1.1"
Dec  5 02:04:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 98babc16-3d08-46ab-99e5-3fdfd00febbc does not exist
Dec  5 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81e1dbda-15d2-4075-a9aa-20f0dbfcd99b does not exist
Dec  5 02:05:00 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 47c69e09-0302-47c5-b4bc-f37689004c16 does not exist
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:05:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:00 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.226374524 +0000 UTC m=+0.089965323 container create f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.192181985 +0000 UTC m=+0.055772854 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:01 compute-0 systemd[1]: Started libpod-conmon-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope.
Dec  5 02:05:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.370518074 +0000 UTC m=+0.234108883 container init f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.384607549 +0000 UTC m=+0.248198328 container start f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.390390641 +0000 UTC m=+0.253981520 container attach f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:05:01 compute-0 heuristic_williamson[436359]: 167 167
Dec  5 02:05:01 compute-0 systemd[1]: libpod-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope: Deactivated successfully.
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.395143304 +0000 UTC m=+0.258734113 container died f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: ERROR   02:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:05:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:05:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-57a8e9179686cb910cbba9e50c77d271ba87690b083f7f38178aee67fcf87b0a-merged.mount: Deactivated successfully.
Dec  5 02:05:01 compute-0 podman[436343]: 2025-12-05 02:05:01.467557594 +0000 UTC m=+0.331148373 container remove f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_williamson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 02:05:01 compute-0 systemd[1]: libpod-conmon-f4ca7e484516bcdfdad01449383fe2d9e10ecdec76b4d8138d431d566c7da034.scope: Deactivated successfully.
Dec  5 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.673793515 +0000 UTC m=+0.057507433 container create b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 02:05:01 compute-0 systemd[1]: Started libpod-conmon-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope.
Dec  5 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.64971315 +0000 UTC m=+0.033427078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:01 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.818095198 +0000 UTC m=+0.201809126 container init b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.842177943 +0000 UTC m=+0.225891851 container start b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:05:01 compute-0 podman[436381]: 2025-12-05 02:05:01.849162829 +0000 UTC m=+0.232876737 container attach b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:05:01 compute-0 nova_compute[349548]: 2025-12-05 02:05:01.916 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:05:01 compute-0 podman[436399]: 2025-12-05 02:05:01.922380041 +0000 UTC m=+0.090349373 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:05:01 compute-0 podman[436398]: 2025-12-05 02:05:01.930991533 +0000 UTC m=+0.117206096 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:05:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:01 compute-0 podman[436400]: 2025-12-05 02:05:01.953260607 +0000 UTC m=+0.127643019 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:05:01 compute-0 podman[436401]: 2025-12-05 02:05:01.988047692 +0000 UTC m=+0.156639281 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:05:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:03 compute-0 quizzical_antonelli[436395]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:05:03 compute-0 quizzical_antonelli[436395]: --> relative data size: 1.0
Dec  5 02:05:03 compute-0 quizzical_antonelli[436395]: --> All data devices are unavailable
Dec  5 02:05:03 compute-0 systemd[1]: libpod-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Deactivated successfully.
Dec  5 02:05:03 compute-0 podman[436381]: 2025-12-05 02:05:03.124964069 +0000 UTC m=+1.508677987 container died b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:05:03 compute-0 systemd[1]: libpod-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Consumed 1.204s CPU time.
Dec  5 02:05:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-79827a8805bbd2edb8df80c2cc3e0114c0d8a66919154ee81a13f86974fb0b79-merged.mount: Deactivated successfully.
Dec  5 02:05:03 compute-0 podman[436381]: 2025-12-05 02:05:03.212612296 +0000 UTC m=+1.596326214 container remove b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_antonelli, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 02:05:03 compute-0 systemd[1]: libpod-conmon-b732d257328d6cf6217590dd81e32906106205f372a78bd9120496b52fba6f7f.scope: Deactivated successfully.
Dec  5 02:05:03 compute-0 nova_compute[349548]: 2025-12-05 02:05:03.244 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:05:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1676: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.346039996 +0000 UTC m=+0.094892331 container create 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.296860907 +0000 UTC m=+0.045713242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:04 compute-0 systemd[1]: Started libpod-conmon-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope.
Dec  5 02:05:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.481236075 +0000 UTC m=+0.230088460 container init 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.498368795 +0000 UTC m=+0.247221120 container start 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:05:04 compute-0 agitated_swirles[436677]: 167 167
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.508274313 +0000 UTC m=+0.257126648 container attach 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:05:04 compute-0 systemd[1]: libpod-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope: Deactivated successfully.
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.510352771 +0000 UTC m=+0.259205146 container died 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  5 02:05:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66d77fff5873d2bc35c7624a1c5debf9e7dfacd3384e04675cfba20e03838fe-merged.mount: Deactivated successfully.
Dec  5 02:05:04 compute-0 podman[436662]: 2025-12-05 02:05:04.582138763 +0000 UTC m=+0.330991098 container remove 463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 02:05:04 compute-0 systemd[1]: libpod-conmon-463917c783601275fc364c7eb5213f37bb52338fcbad393ffbb3f54567cfa673.scope: Deactivated successfully.
Dec  5 02:05:04 compute-0 podman[436700]: 2025-12-05 02:05:04.864988931 +0000 UTC m=+0.077575995 container create 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:05:04 compute-0 podman[436700]: 2025-12-05 02:05:04.836575725 +0000 UTC m=+0.049162789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:04 compute-0 systemd[1]: Started libpod-conmon-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope.
Dec  5 02:05:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.042002903 +0000 UTC m=+0.254590017 container init 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.070019478 +0000 UTC m=+0.282606542 container start 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 02:05:05 compute-0 podman[436700]: 2025-12-05 02:05:05.076799898 +0000 UTC m=+0.289386962 container attach 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]: {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    "0": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "devices": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "/dev/loop3"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            ],
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_name": "ceph_lv0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_size": "21470642176",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "name": "ceph_lv0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "tags": {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_name": "ceph",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.crush_device_class": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.encrypted": "0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_id": "0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.vdo": "0"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            },
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "vg_name": "ceph_vg0"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        }
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    ],
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    "1": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "devices": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "/dev/loop4"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            ],
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_name": "ceph_lv1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_size": "21470642176",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "name": "ceph_lv1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "tags": {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_name": "ceph",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.crush_device_class": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.encrypted": "0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_id": "1",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.vdo": "0"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            },
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "vg_name": "ceph_vg1"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        }
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    ],
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    "2": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "devices": [
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "/dev/loop5"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            ],
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_name": "ceph_lv2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_size": "21470642176",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "name": "ceph_lv2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "tags": {
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.cluster_name": "ceph",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.crush_device_class": "",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.encrypted": "0",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osd_id": "2",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:                "ceph.vdo": "0"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            },
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "type": "block",
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:            "vg_name": "ceph_vg2"
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:        }
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]:    ]
Dec  5 02:05:05 compute-0 suspicious_torvalds[436716]: }
Dec  5 02:05:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:05 compute-0 systemd[1]: libpod-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope: Deactivated successfully.
Dec  5 02:05:06 compute-0 podman[436725]: 2025-12-05 02:05:06.034279775 +0000 UTC m=+0.042101851 container died 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 02:05:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ddb0fb959ae2841151722acf485fe3c101408938cbf57e7b4a17cdad7f5a30-merged.mount: Deactivated successfully.
Dec  5 02:05:06 compute-0 podman[436725]: 2025-12-05 02:05:06.145451701 +0000 UTC m=+0.153273727 container remove 62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:05:06 compute-0 systemd[1]: libpod-conmon-62ec20c7b7d1b947544db6d0a77b25155fcf4bf633570f1ada8568e2b6562b5f.scope: Deactivated successfully.
Dec  5 02:05:06 compute-0 nova_compute[349548]: 2025-12-05 02:05:06.919 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.212573522 +0000 UTC m=+0.061737331 container create 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 02:05:07 compute-0 systemd[1]: Started libpod-conmon-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope.
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.197440828 +0000 UTC m=+0.046604657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.352780622 +0000 UTC m=+0.201944521 container init 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.369777638 +0000 UTC m=+0.218941487 container start 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.376593689 +0000 UTC m=+0.225757538 container attach 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:05:07 compute-0 crazy_yonath[436891]: 167 167
Dec  5 02:05:07 compute-0 systemd[1]: libpod-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope: Deactivated successfully.
Dec  5 02:05:07 compute-0 conmon[436891]: conmon 3136e18b461bea9a650b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope/container/memory.events
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.38266771 +0000 UTC m=+0.231831569 container died 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:05:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-901af356a2aa863c0a3c299ec45d1255f3591dcdace5a6d7c9ae5bd9718f0d73-merged.mount: Deactivated successfully.
Dec  5 02:05:07 compute-0 podman[436876]: 2025-12-05 02:05:07.4533041 +0000 UTC m=+0.302467929 container remove 3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:05:07 compute-0 systemd[1]: libpod-conmon-3136e18b461bea9a650b37917ac351315b467bb1cc887cf3e12cb0b21be5bb7d.scope: Deactivated successfully.
Dec  5 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.684305974 +0000 UTC m=+0.073998695 container create cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.653540782 +0000 UTC m=+0.043233523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:05:07 compute-0 systemd[1]: Started libpod-conmon-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope.
Dec  5 02:05:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.847057836 +0000 UTC m=+0.236750577 container init cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.879731282 +0000 UTC m=+0.269424013 container start cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:05:07 compute-0 podman[436916]: 2025-12-05 02:05:07.886300236 +0000 UTC m=+0.275992967 container attach cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:05:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:08 compute-0 nova_compute[349548]: 2025-12-05 02:05:08.247 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:08 compute-0 pedantic_ride[436931]: {
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_id": 0,
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "type": "bluestore"
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    },
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_id": 1,
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "type": "bluestore"
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    },
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_id": 2,
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:        "type": "bluestore"
Dec  5 02:05:08 compute-0 pedantic_ride[436931]:    }
Dec  5 02:05:08 compute-0 pedantic_ride[436931]: }
Dec  5 02:05:08 compute-0 systemd[1]: libpod-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Deactivated successfully.
Dec  5 02:05:08 compute-0 podman[436916]: 2025-12-05 02:05:08.985122715 +0000 UTC m=+1.374815456 container died cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 02:05:08 compute-0 systemd[1]: libpod-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Consumed 1.118s CPU time.
Dec  5 02:05:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7ab28fde98738a5945a44520228f55a92932e9683fbe9578593d2a6feb9180-merged.mount: Deactivated successfully.
Dec  5 02:05:09 compute-0 podman[436916]: 2025-12-05 02:05:09.075491468 +0000 UTC m=+1.465184179 container remove cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:05:09 compute-0 systemd[1]: libpod-conmon-cd34364ffb1fbcb73dddbeb821da46470ac9ad338d209c3a4d3bd681bf5f1ac4.scope: Deactivated successfully.
Dec  5 02:05:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:05:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:05:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ccb55c4c-4c89-4278-b08b-f0c7389963b2 does not exist
Dec  5 02:05:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8edc4e7b-c24b-4218-85a2-5d95efca1f29 does not exist
Dec  5 02:05:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:05:11 compute-0 nova_compute[349548]: 2025-12-05 02:05:11.922 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:13 compute-0 nova_compute[349548]: 2025-12-05 02:05:13.250 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1682: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:05:16
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.control', 'vms']
Dec  5 02:05:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:05:16 compute-0 nova_compute[349548]: 2025-12-05 02:05:16.926 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:05:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:05:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:18 compute-0 nova_compute[349548]: 2025-12-05 02:05:18.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:18 compute-0 podman[437028]: 2025-12-05 02:05:18.709734471 +0000 UTC m=+0.115736515 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:05:18 compute-0 podman[437029]: 2025-12-05 02:05:18.719477904 +0000 UTC m=+0.129813580 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:05:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:20 compute-0 podman[437069]: 2025-12-05 02:05:20.714440601 +0000 UTC m=+0.118696268 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:05:20 compute-0 podman[437068]: 2025-12-05 02:05:20.729623937 +0000 UTC m=+0.131988951 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:05:21 compute-0 nova_compute[349548]: 2025-12-05 02:05:21.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1685: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:22 compute-0 podman[437108]: 2025-12-05 02:05:22.744838022 +0000 UTC m=+0.149494291 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Dec  5 02:05:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:23 compute-0 nova_compute[349548]: 2025-12-05 02:05:23.256 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1687: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.085 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:05:26 compute-0 nova_compute[349548]: 2025-12-05 02:05:26.932 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00110425264130364 of space, bias 1.0, pg target 0.331275792391092 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:05:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:05:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:28 compute-0 nova_compute[349548]: 2025-12-05 02:05:28.259 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:29 compute-0 podman[158197]: time="2025-12-05T02:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:05:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:05:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8631 "" "Go-http-client/1.1"
Dec  5 02:05:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1689: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:31 compute-0 nova_compute[349548]: 2025-12-05 02:05:31.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: ERROR   02:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:05:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:05:31 compute-0 nova_compute[349548]: 2025-12-05 02:05:31.936 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:32 compute-0 nova_compute[349548]: 2025-12-05 02:05:32.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:32 compute-0 nova_compute[349548]: 2025-12-05 02:05:32.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:05:32 compute-0 podman[437130]: 2025-12-05 02:05:32.71285748 +0000 UTC m=+0.114330205 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:05:32 compute-0 podman[437129]: 2025-12-05 02:05:32.727188522 +0000 UTC m=+0.133898424 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:05:32 compute-0 podman[437132]: 2025-12-05 02:05:32.731954706 +0000 UTC m=+0.130875980 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Dec  5 02:05:32 compute-0 podman[437131]: 2025-12-05 02:05:32.757511762 +0000 UTC m=+0.156638742 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 02:05:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:33 compute-0 nova_compute[349548]: 2025-12-05 02:05:33.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:33 compute-0 nova_compute[349548]: 2025-12-05 02:05:33.263 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1691: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.552 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.552 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.553 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:05:34 compute-0 nova_compute[349548]: 2025-12-05 02:05:34.553 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.920 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [{"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.940 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-b69a0e24-1bc4-46a5-92d7-367c1efd53df" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.940 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.941 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.942 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.979 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.980 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:05:35 compute-0 nova_compute[349548]: 2025-12-05 02:05:35.981 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:05:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:05:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034647949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.517 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.645 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.646 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.646 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.658 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.659 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.659 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:05:36 compute-0 nova_compute[349548]: 2025-12-05 02:05:36.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.261 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.264 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.92203903198242GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.265 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.265 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.422 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance b69a0e24-1bc4-46a5-92d7-367c1efd53df actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:05:37 compute-0 nova_compute[349548]: 2025-12-05 02:05:37.642 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:05:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:05:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311608690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.174 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.187 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.203 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.206 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.207 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.208 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:38 compute-0 nova_compute[349548]: 2025-12-05 02:05:38.265 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.371 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.372 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:40 compute-0 nova_compute[349548]: 2025-12-05 02:05:40.372 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:41 compute-0 nova_compute[349548]: 2025-12-05 02:05:41.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:41 compute-0 nova_compute[349548]: 2025-12-05 02:05:41.943 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1695: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:43 compute-0 nova_compute[349548]: 2025-12-05 02:05:43.269 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:05:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:05:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:05:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2250566486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:05:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:05:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:05:46 compute-0 nova_compute[349548]: 2025-12-05 02:05:46.947 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:48 compute-0 nova_compute[349548]: 2025-12-05 02:05:48.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:49 compute-0 podman[437258]: 2025-12-05 02:05:49.713341147 +0000 UTC m=+0.118010179 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:05:49 compute-0 podman[437257]: 2025-12-05 02:05:49.718759459 +0000 UTC m=+0.125971782 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  5 02:05:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:51 compute-0 podman[437297]: 2025-12-05 02:05:51.721173255 +0000 UTC m=+0.121616569 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 02:05:51 compute-0 podman[437296]: 2025-12-05 02:05:51.723630814 +0000 UTC m=+0.131669201 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 02:05:51 compute-0 nova_compute[349548]: 2025-12-05 02:05:51.951 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:53 compute-0 nova_compute[349548]: 2025-12-05 02:05:53.275 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:53 compute-0 podman[437334]: 2025-12-05 02:05:53.737820181 +0000 UTC m=+0.148204075 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, version=9.4, architecture=x86_64, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc.)
Dec  5 02:05:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 139 MiB data, 289 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.307 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.308 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.309 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.311 349552 INFO nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Terminating instance#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.313 349552 DEBUG nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:05:54 compute-0 kernel: tap2799035c-b9 (unregistering): left promiscuous mode
Dec  5 02:05:54 compute-0 NetworkManager[49092]: <info>  [1764900354.4894] device (tap2799035c-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.504 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00058|binding|INFO|Releasing lport 2799035c-b9e1-4c24-b031-9824b684480c from this chassis (sb_readonly=0)
Dec  5 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00059|binding|INFO|Setting lport 2799035c-b9e1-4c24-b031-9824b684480c down in Southbound
Dec  5 02:05:54 compute-0 ovn_controller[89286]: 2025-12-05T02:05:54Z|00060|binding|INFO|Removing iface tap2799035c-b9 ovn-installed in OVS
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.516 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:64:51 192.168.0.169'], port_security=['fa:16:3e:10:64:51 192.168.0.169'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:cidrs': '192.168.0.169/24', 'neutron:device_id': '3611d2ae-da33-4e55-aec7-0bec88d3b4e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-qkgif4ysdpfw-etyk2gsqvxro-nwtay2ho224x-port-44wmftlb3hgo', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.221', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2799035c-b9e1-4c24-b031-9824b684480c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.518 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2799035c-b9e1-4c24-b031-9824b684480c in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.520 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.532 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.547 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[050476c8-763c-4808-9373-d125d6da87ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.593 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d9bb6124-2012-4379-b3cb-249280840c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.597 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[4563ec15-0fd5-4ff4-b110-ba3975947be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  5 02:05:54 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 13.002s CPU time.
Dec  5 02:05:54 compute-0 systemd-machined[138700]: Machine qemu-4-instance-00000004 terminated.
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.632 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[87921158-9638-4a5b-9835-3b1f8c503688]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfc2e8d-ddd4-4eb0-9437-1682beb9a374]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap49f7d2f1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:8a:33'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 16, 'rx_bytes': 616, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537514, 'reachable_time': 38410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 437364, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.682 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb390b9-9986-4a89-b249-d5408274a9d0]: (4, ({'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537531, 'tstamp': 537531}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437365, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap49f7d2f1-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537536, 'tstamp': 537536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 437365, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.684 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.698 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f7d2f1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.699 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.700 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap49f7d2f1-f0, col_values=(('external_ids', {'iface-id': '35b0af3f-4a87-44c5-9b77-2f08261b9985'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.700 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.774 349552 INFO nova.virt.libvirt.driver [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Instance destroyed successfully.#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.775 349552 DEBUG nova.objects.instance [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.790 349552 DEBUG nova.virt.libvirt.vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:55:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-4ysdpfw-etyk2gsqvxro-nwtay2ho224x-vnf-wh6pa34aydpq',id=4,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:55:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='b1daa6e2-02a9-4f4f-bb3e-c27b00c752a1'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-105jpxj7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:55:46Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODgwNjA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  5 02:05:54 compute-0 nova_compute[349548]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODgwN
jA2ODYzMzIwMTUwMTM3MT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4MDYwNjg2MzMyMDE1MDEzNzE9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODA2MDY4NjMzMjAxNTAxMzcxPT0tLQo=',user_id='ff880837791d4f49a54672b8d0e705ff',uuid=3611d2ae-da33-4e55-aec7-0bec88d3b4e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.792 349552 DEBUG nova.network.os_vif_util [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.793 349552 DEBUG nova.network.os_vif_util [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.794 349552 DEBUG os_vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.797 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2799035c-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.807 349552 INFO os_vif [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:64:51,bridge_name='br-int',has_traffic_filtering=True,id=2799035c-b9e1-4c24-b031-9824b684480c,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2799035c-b9')#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.891 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.891 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.892 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.892 349552 DEBUG oslo_concurrency.lockutils [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.893 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.895 349552 DEBUG nova.compute.manager [req-798d238f-c905-4d4e-a113-5002d3b8f5c8 req-003838d9-d3f2-43a6-ac6a-e6c3abf87600 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-unplugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:05:54 compute-0 rsyslogd[188644]: message too long (8192) with configured size 8096, begin of message is: 2025-12-05 02:05:54.790 349552 DEBUG nova.virt.libvirt.vif [None req-7a53523e-9a [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.934 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:05:54 compute-0 nova_compute[349548]: 2025-12-05 02:05:54.934 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:54.936 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.876 349552 DEBUG nova.compute.manager [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-changed-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.877 349552 DEBUG nova.compute.manager [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing instance network info cache due to event network-changed-2799035c-b9e1-4c24-b031-9824b684480c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.877 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.878 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:05:55 compute-0 nova_compute[349548]: 2025-12-05 02:05:55.878 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Refreshing network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:05:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 120 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 341 B/s wr, 11 op/s
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.160 349552 INFO nova.virt.libvirt.driver [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deleting instance files /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_del#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.162 349552 INFO nova.virt.libvirt.driver [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deletion of /var/lib/nova/instances/3611d2ae-da33-4e55-aec7-0bec88d3b4e0_del complete#033[00m
Dec  5 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.199 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.200 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:05:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.231 349552 INFO nova.compute.manager [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 1.92 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.232 349552 DEBUG oslo.service.loopingcall [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.234 349552 DEBUG nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.234 349552 DEBUG nova.network.neutron [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.898 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updated VIF entry in instance network info cache for port 2799035c-b9e1-4c24-b031-9824b684480c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.898 349552 DEBUG nova.network.neutron [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [{"id": "2799035c-b9e1-4c24-b031-9824b684480c", "address": "fa:16:3e:10:64:51", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.169", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2799035c-b9", "ovs_interfaceid": "2799035c-b9e1-4c24-b031-9824b684480c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.918 349552 DEBUG oslo_concurrency.lockutils [req-394131f4-7a47-4a35-9a3c-7f7933c74daf req-bf53862d-f018-4517-a21c-2ddb211e10fd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3611d2ae-da33-4e55-aec7-0bec88d3b4e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.954 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.976 349552 DEBUG nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.977 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.978 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.978 349552 DEBUG oslo_concurrency.lockutils [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.979 349552 DEBUG nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] No waiting events found dispatching network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:05:56 compute-0 nova_compute[349548]: 2025-12-05 02:05:56.979 349552 WARNING nova.compute.manager [req-4881592f-42e8-4b45-b724-30e3f90096a4 req-64419768-b714-4358-8708-9ba529c9806c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Received unexpected event network-vif-plugged-2799035c-b9e1-4c24-b031-9824b684480c for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.091 349552 DEBUG nova.network.neutron [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.109 349552 INFO nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Took 0.87 seconds to deallocate network for instance.#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.154 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.155 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.275 349552 DEBUG oslo_concurrency.processutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:05:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:05:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858799124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.807 349552 DEBUG oslo_concurrency.processutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.821 349552 DEBUG nova.compute.provider_tree [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.843 349552 DEBUG nova.scheduler.client.report [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.879 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:57 compute-0 nova_compute[349548]: 2025-12-05 02:05:57.916 349552 INFO nova.scheduler.client.report [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance 3611d2ae-da33-4e55-aec7-0bec88d3b4e0#033[00m
Dec  5 02:05:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec  5 02:05:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:05:58 compute-0 nova_compute[349548]: 2025-12-05 02:05:58.017 349552 DEBUG oslo_concurrency.lockutils [None req-7a53523e-9acc-4892-b6cf-9110bd32994b ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "3611d2ae-da33-4e55-aec7-0bec88d3b4e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:05:59 compute-0 podman[158197]: time="2025-12-05T02:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:05:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:05:59 compute-0 nova_compute[349548]: 2025-12-05 02:05:59.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:05:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8634 "" "Go-http-client/1.1"
Dec  5 02:05:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 78 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: ERROR   02:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:06:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:06:01 compute-0 nova_compute[349548]: 2025-12-05 02:06:01.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:06:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:03 compute-0 podman[437420]: 2025-12-05 02:06:03.698037579 +0000 UTC m=+0.099332815 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:06:03 compute-0 podman[437419]: 2025-12-05 02:06:03.72768597 +0000 UTC m=+0.129666505 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 02:06:03 compute-0 podman[437422]: 2025-12-05 02:06:03.742552557 +0000 UTC m=+0.128509253 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  5 02:06:03 compute-0 podman[437421]: 2025-12-05 02:06:03.781655963 +0000 UTC m=+0.169895493 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  5 02:06:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 78 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:06:04 compute-0 nova_compute[349548]: 2025-12-05 02:06:04.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:04 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:04.939 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:06:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 78 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Dec  5 02:06:06 compute-0 nova_compute[349548]: 2025-12-05 02:06:06.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.4 KiB/s wr, 75 op/s
Dec  5 02:06:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.767 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900354.7657616, 3611d2ae-da33-4e55-aec7-0bec88d3b4e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.768 349552 INFO nova.compute.manager [-] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.791 349552 DEBUG nova.compute.manager [None req-8182f6bc-714f-49f8-bb72-79f902194f8d - - - - - -] [instance: 3611d2ae-da33-4e55-aec7-0bec88d3b4e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:06:09 compute-0 nova_compute[349548]: 2025-12-05 02:06:09.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 56 op/s
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fba79e18-a11d-4ce1-b759-cb95035c4441 does not exist
Dec  5 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3376d10d-caeb-4635-8b55-037038ef4944 does not exist
Dec  5 02:06:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 694c1e9d-dbcc-4e1a-a5c3-fa7620fc210d does not exist
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:06:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:06:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:11 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:06:11 compute-0 nova_compute[349548]: 2025-12-05 02:06:11.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 69 op/s
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.046190818 +0000 UTC m=+0.093785150 container create 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:06:12 compute-0 systemd[1]: Started libpod-conmon-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope.
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.012105512 +0000 UTC m=+0.059699884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.160494942 +0000 UTC m=+0.208089304 container init 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.177828307 +0000 UTC m=+0.225422609 container start 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.184373641 +0000 UTC m=+0.231967963 container attach 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:06:12 compute-0 cool_wozniak[437787]: 167 167
Dec  5 02:06:12 compute-0 systemd[1]: libpod-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope: Deactivated successfully.
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.191766098 +0000 UTC m=+0.239360430 container died 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:06:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c51af9e59d23edadf7908376ba673978c790a72db7629ef8c9215c66457a3e3-merged.mount: Deactivated successfully.
Dec  5 02:06:12 compute-0 podman[437772]: 2025-12-05 02:06:12.277017408 +0000 UTC m=+0.324611710 container remove 338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:06:12 compute-0 systemd[1]: libpod-conmon-338ad9464d46e1e5e87fc538ed3a58ce9458f0887f8710e629aeff4a6eccc221.scope: Deactivated successfully.
Dec  5 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.578709334 +0000 UTC m=+0.111660851 container create 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.54074542 +0000 UTC m=+0.073696987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:12 compute-0 systemd[1]: Started libpod-conmon-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope.
Dec  5 02:06:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.724993144 +0000 UTC m=+0.257944671 container init 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.746018103 +0000 UTC m=+0.278969580 container start 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:06:12 compute-0 podman[437810]: 2025-12-05 02:06:12.750657464 +0000 UTC m=+0.283608941 container attach 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:06:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:13 compute-0 admiring_poitras[437826]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:06:13 compute-0 admiring_poitras[437826]: --> relative data size: 1.0
Dec  5 02:06:13 compute-0 admiring_poitras[437826]: --> All data devices are unavailable
Dec  5 02:06:13 compute-0 systemd[1]: libpod-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Deactivated successfully.
Dec  5 02:06:13 compute-0 podman[437810]: 2025-12-05 02:06:13.950137543 +0000 UTC m=+1.483089060 container died 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:06:13 compute-0 systemd[1]: libpod-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Consumed 1.154s CPU time.
Dec  5 02:06:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:06:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d57ec27aa8ac55924a7ddbbe8fb85046565f69c1f485f4e8b74a59fd67bfe1-merged.mount: Deactivated successfully.
Dec  5 02:06:14 compute-0 podman[437810]: 2025-12-05 02:06:14.039322973 +0000 UTC m=+1.572274460 container remove 8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_poitras, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:06:14 compute-0 systemd[1]: libpod-conmon-8c1b3b3c47bdc36d254a94c196169c5831d086343fdb69f9f5932c5908066b07.scope: Deactivated successfully.
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.776 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.778 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.779 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.780 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.781 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.783 349552 INFO nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Terminating instance#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.785 349552 DEBUG nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:14 compute-0 kernel: tap68143c81-65 (unregistering): left promiscuous mode
Dec  5 02:06:14 compute-0 NetworkManager[49092]: <info>  [1764900374.9393] device (tap68143c81-65): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00061|binding|INFO|Releasing lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 from this chassis (sb_readonly=0)
Dec  5 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00062|binding|INFO|Setting lport 68143c81-65a4-4ed0-8902-dbe0c8d89224 down in Southbound
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:14 compute-0 ovn_controller[89286]: 2025-12-05T02:06:14Z|00063|binding|INFO|Removing iface tap68143c81-65 ovn-installed in OVS
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.965 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:12:24 192.168.0.48'], port_security=['fa:16:3e:0c:12:24 192.168.0.48'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.48/24', 'neutron:device_id': 'b69a0e24-1bc4-46a5-92d7-367c1efd53df', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6ad982b73954486390215862ee62239f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf07c149-4b4f-4cc9-a5b5-cfd139acbede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8440543a-d57d-422f-b491-49a678c2776e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=68143c81-65a4-4ed0-8902-dbe0c8d89224) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.967 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 68143c81-65a4-4ed0-8902-dbe0c8d89224 in datapath 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 unbound from our chassis#033[00m
Dec  5 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.968 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.970 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[67eb8688-d472-4d5a-89a4-6a0e875d438a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:14.971 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 namespace which is not needed anymore#033[00m
Dec  5 02:06:14 compute-0 nova_compute[349548]: 2025-12-05 02:06:14.987 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  5 02:06:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 28.101s CPU time.
Dec  5 02:06:15 compute-0 systemd-machined[138700]: Machine qemu-1-instance-00000001 terminated.
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.144869131 +0000 UTC m=+0.055282971 container create fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:06:15 compute-0 systemd[1]: Started libpod-conmon-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope.
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : haproxy version is 2.8.14-c23fe91
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [NOTICE]   (412842) : path to executable is /usr/sbin/haproxy
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : Exiting Master process...
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : Exiting Master process...
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [ALERT]    (412842) : Current worker (412844) exited with code 143 (Terminated)
Dec  5 02:06:15 compute-0 neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183[412838]: [WARNING]  (412842) : All workers exited. Exiting... (0)
Dec  5 02:06:15 compute-0 systemd[1]: libpod-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope: Deactivated successfully.
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.123270186 +0000 UTC m=+0.033684046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:15 compute-0 podman[438034]: 2025-12-05 02:06:15.221113358 +0000 UTC m=+0.106349922 container died 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.236 349552 INFO nova.virt.libvirt.driver [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Instance destroyed successfully.#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.237 349552 DEBUG nova.objects.instance [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lazy-loading 'resources' on Instance uuid b69a0e24-1bc4-46a5-92d7-367c1efd53df obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:06:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.262 349552 DEBUG nova.virt.libvirt.vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T01:47:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T01:48:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6ad982b73954486390215862ee62239f',ramdisk_id='',reservation_id='r-u7sbhrgz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='aa58c1e9-bdcc-4e60-9cee-eaeee0741251',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.op
enstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T01:48:05Z,user_data=None,user_id='ff880837791d4f49a54672b8d0e705ff',uuid=b69a0e24-1bc4-46a5-92d7-367c1efd53df,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.263 349552 DEBUG nova.network.os_vif_util [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converting VIF {"id": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "address": "fa:16:3e:0c:12:24", "network": {"id": "49f7d2f1-f1ff-4dcc-94db-d088dc8d3183", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.48", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6ad982b73954486390215862ee62239f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68143c81-65", "ovs_interfaceid": "68143c81-65a4-4ed0-8902-dbe0c8d89224", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.264 349552 DEBUG nova.network.os_vif_util [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.264 349552 DEBUG os_vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.266 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.266 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68143c81-65, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.271 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.274 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe-userdata-shm.mount: Deactivated successfully.
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.278 349552 INFO os_vif [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:12:24,bridge_name='br-int',has_traffic_filtering=True,id=68143c81-65a4-4ed0-8902-dbe0c8d89224,network=Network(49f7d2f1-f1ff-4dcc-94db-d088dc8d3183),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68143c81-65')#033[00m
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.281336556 +0000 UTC m=+0.191750446 container init fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a820a613b1e07df1e33c546156b70839ccd983fd42dcef40eb3db4bae4f3e023-merged.mount: Deactivated successfully.
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.292831158 +0000 UTC m=+0.203245008 container start fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 02:06:15 compute-0 podman[438034]: 2025-12-05 02:06:15.296961764 +0000 UTC m=+0.182198338 container cleanup 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:06:15 compute-0 nifty_meitner[438062]: 167 167
Dec  5 02:06:15 compute-0 systemd[1]: libpod-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope: Deactivated successfully.
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.307229012 +0000 UTC m=+0.217642902 container attach fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.308099976 +0000 UTC m=+0.218513866 container died fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:06:15 compute-0 systemd[1]: libpod-conmon-70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe.scope: Deactivated successfully.
Dec  5 02:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-908a593687eb4325e9ae8dbda8dec95f6b74b689b11673cfdbd2bf02b8835806-merged.mount: Deactivated successfully.
Dec  5 02:06:15 compute-0 podman[438021]: 2025-12-05 02:06:15.370640659 +0000 UTC m=+0.281054499 container remove fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:06:15 compute-0 systemd[1]: libpod-conmon-fc296c36233c1206c2f01b64465262c536aabefb1453d1c8e37a5cefb838df96.scope: Deactivated successfully.
Dec  5 02:06:15 compute-0 podman[438102]: 2025-12-05 02:06:15.402991336 +0000 UTC m=+0.067058541 container remove 70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.419 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cf358dd8-1fed-4f68-9042-b835be654403]: (4, ('Fri Dec  5 02:06:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 (70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe)\n70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe\nFri Dec  5 02:06:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 (70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe)\n70e46b28e6d55043e4ffa93fc50c9225b06cb6223f5ded4fca4e2ac8c241f8fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.423 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b1a6ec-5738-43ef-a7cd-44a04beffbea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.424 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f7d2f1-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:15 compute-0 kernel: tap49f7d2f1-f0: left promiscuous mode
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.440 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.445 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8f047bc5-54cf-4cce-b870-2d954975ea78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.467 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3a064bb4-f095-4902-b891-81dfe8d81e56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.469 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ccf76d-7797-4e8b-a6cb-4b0056d6b618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.489 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b68dc1cf-6563-41e1-ad8a-1e39d0fefd2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537500, 'reachable_time': 37315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 438131, 'error': None, 'target': 'ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:06:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d49f7d2f1\x2df1ff\x2d4dcc\x2d94db\x2dd088dc8d3183.mount: Deactivated successfully.
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.504 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-49f7d2f1-f1ff-4dcc-94db-d088dc8d3183 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec  5 02:06:15 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:15.504 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[97fd04d5-1d29-4e92-a5dc-c514efe1f125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.640359949 +0000 UTC m=+0.099154110 container create c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.587401475 +0000 UTC m=+0.046195676 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:15 compute-0 systemd[1]: Started libpod-conmon-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope.
Dec  5 02:06:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.800982622 +0000 UTC m=+0.259776813 container init c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.814 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.815 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG oslo_concurrency.lockutils [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  5 02:06:15 compute-0 nova_compute[349548]: 2025-12-05 02:06:15.816 349552 DEBUG nova.compute.manager [req-67e1698a-b024-4326-a1c6-20e5132a65fb req-3dd2efe5-2e50-4fcd-b141-0f8cc772d06f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-unplugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  5 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.820422616 +0000 UTC m=+0.279216747 container start c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:06:15 compute-0 podman[438138]: 2025-12-05 02:06:15.825506199 +0000 UTC m=+0.284300340 container attach c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:06:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 78 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:06:16
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'default.rgw.log', 'images']
Dec  5 02:06:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.441 349552 INFO nova.virt.libvirt.driver [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deleting instance files /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df_del
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.442 349552 INFO nova.virt.libvirt.driver [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deletion of /var/lib/nova/instances/b69a0e24-1bc4-46a5-92d7-367c1efd53df_del complete
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.522 349552 INFO nova.compute.manager [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 1.74 seconds to destroy the instance on the hypervisor.
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.523 349552 DEBUG oslo.service.loopingcall [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.523 349552 DEBUG nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.524 349552 DEBUG nova.network.neutron [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  5 02:06:16 compute-0 jolly_lewin[438155]: {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    "0": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "devices": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "/dev/loop3"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            ],
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_name": "ceph_lv0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_size": "21470642176",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "name": "ceph_lv0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "tags": {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_name": "ceph",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.crush_device_class": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.encrypted": "0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_id": "0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.vdo": "0"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            },
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "vg_name": "ceph_vg0"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        }
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    ],
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    "1": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "devices": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "/dev/loop4"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            ],
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_name": "ceph_lv1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_size": "21470642176",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "name": "ceph_lv1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "tags": {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_name": "ceph",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.crush_device_class": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.encrypted": "0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_id": "1",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.vdo": "0"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            },
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "vg_name": "ceph_vg1"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        }
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    ],
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    "2": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "devices": [
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "/dev/loop5"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            ],
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_name": "ceph_lv2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_size": "21470642176",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "name": "ceph_lv2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "tags": {
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.cluster_name": "ceph",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.crush_device_class": "",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.encrypted": "0",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osd_id": "2",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:                "ceph.vdo": "0"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            },
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "type": "block",
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:            "vg_name": "ceph_vg2"
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:        }
Dec  5 02:06:16 compute-0 jolly_lewin[438155]:    ]
Dec  5 02:06:16 compute-0 jolly_lewin[438155]: }
Dec  5 02:06:16 compute-0 systemd[1]: libpod-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope: Deactivated successfully.
Dec  5 02:06:16 compute-0 podman[438164]: 2025-12-05 02:06:16.769946111 +0000 UTC m=+0.047259356 container died c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-81d56d6d131913e1cec8f0ceb97f1b2943433821158723bd33ce4e0f426edd94-merged.mount: Deactivated successfully.
Dec  5 02:06:16 compute-0 podman[438164]: 2025-12-05 02:06:16.889762069 +0000 UTC m=+0.167075304 container remove c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:06:16 compute-0 systemd[1]: libpod-conmon-c8c45582c6cfbcc0644ee7f1d1dd33338f323c55ac0943a96951abf8a652e38c.scope: Deactivated successfully.
Dec  5 02:06:16 compute-0 nova_compute[349548]: 2025-12-05 02:06:16.967 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.883 349552 DEBUG nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.884 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG oslo_concurrency.lockutils [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.885 349552 DEBUG nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] No waiting events found dispatching network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  5 02:06:17 compute-0 nova_compute[349548]: 2025-12-05 02:06:17.886 349552 WARNING nova.compute.manager [req-572ee730-2eff-41f9-8b0a-a161ed1c2305 req-c789fef6-e6b9-4690-a0ed-f23b02fe5e51 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received unexpected event network-vif-plugged-68143c81-65a4-4ed0-8902-dbe0c8d89224 for instance with vm_state active and task_state deleting.
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:06:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 64 op/s
Dec  5 02:06:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.077374197 +0000 UTC m=+0.083611055 container create 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.047647883 +0000 UTC m=+0.053884811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:18 compute-0 systemd[1]: Started libpod-conmon-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope.
Dec  5 02:06:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.215103007 +0000 UTC m=+0.221339945 container init 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.23054889 +0000 UTC m=+0.236785768 container start 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.237033582 +0000 UTC m=+0.243270500 container attach 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:06:18 compute-0 pedantic_black[438332]: 167 167
Dec  5 02:06:18 compute-0 systemd[1]: libpod-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope: Deactivated successfully.
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.242291339 +0000 UTC m=+0.248528217 container died 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 02:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c294a0b072976971260a31bc302fdb8dfae908c69cad5b3016c960a7000c7a84-merged.mount: Deactivated successfully.
Dec  5 02:06:18 compute-0 podman[438317]: 2025-12-05 02:06:18.319102262 +0000 UTC m=+0.325339140 container remove 6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:06:18 compute-0 systemd[1]: libpod-conmon-6d666649107f181172578cd7718675dc377a20e320258ba40ef86f58e60587ff.scope: Deactivated successfully.
Dec  5 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.574196522 +0000 UTC m=+0.078473100 container create 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.532159114 +0000 UTC m=+0.036435722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:06:18 compute-0 systemd[1]: Started libpod-conmon-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope.
Dec  5 02:06:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.732131729 +0000 UTC m=+0.236408387 container init 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.764328702 +0000 UTC m=+0.268605270 container start 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:06:18 compute-0 podman[438354]: 2025-12-05 02:06:18.772859491 +0000 UTC m=+0.277136139 container attach 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:06:18 compute-0 nova_compute[349548]: 2025-12-05 02:06:18.945 349552 DEBUG nova.network.neutron [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.009 349552 INFO nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Took 2.49 seconds to deallocate network for instance.#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.079 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.079 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.084 349552 DEBUG nova.compute.manager [req-f6cb3adb-4e28-40f7-884b-5c4fb47d8647 req-b7a550a6-7352-4a20-b5b4-ef53bd625e42 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Received event network-vif-deleted-68143c81-65a4-4ed0-8902-dbe0c8d89224 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.158 349552 DEBUG oslo_concurrency.processutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:06:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:06:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1535300410' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.700 349552 DEBUG oslo_concurrency.processutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.712 349552 DEBUG nova.compute.provider_tree [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.743 349552 DEBUG nova.scheduler.client.report [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.778 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.811 349552 INFO nova.scheduler.client.report [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Deleted allocations for instance b69a0e24-1bc4-46a5-92d7-367c1efd53df#033[00m
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]: {
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_id": 0,
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "type": "bluestore"
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    },
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_id": 1,
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "type": "bluestore"
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    },
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_id": 2,
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:        "type": "bluestore"
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]:    }
Dec  5 02:06:19 compute-0 wizardly_hamilton[438371]: }
Dec  5 02:06:19 compute-0 systemd[1]: libpod-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Deactivated successfully.
Dec  5 02:06:19 compute-0 systemd[1]: libpod-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Consumed 1.136s CPU time.
Dec  5 02:06:19 compute-0 nova_compute[349548]: 2025-12-05 02:06:19.949 349552 DEBUG oslo_concurrency.lockutils [None req-ca50c56c-5119-4f6b-bc47-c9b1e923dfe7 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Lock "b69a0e24-1bc4-46a5-92d7-367c1efd53df" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:19 compute-0 podman[438427]: 2025-12-05 02:06:19.978317199 +0000 UTC m=+0.054595681 container died 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:06:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 33 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.7 KiB/s wr, 27 op/s
Dec  5 02:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-292a89f71e83dbf0d5b323127569c041d8b3a38a2352b7970363a1989c355ef8-merged.mount: Deactivated successfully.
Dec  5 02:06:20 compute-0 podman[438427]: 2025-12-05 02:06:20.066651645 +0000 UTC m=+0.142930107 container remove 6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:06:20 compute-0 podman[438434]: 2025-12-05 02:06:20.068326702 +0000 UTC m=+0.111293290 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:06:20 compute-0 systemd[1]: libpod-conmon-6243bc521f73846bc3ab93dfca503f54416e9b333e127d82d25e885eeaa18882.scope: Deactivated successfully.
Dec  5 02:06:20 compute-0 podman[438428]: 2025-12-05 02:06:20.08608029 +0000 UTC m=+0.126112526 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:06:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:06:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:06:20 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 18ab9890-cf2e-4cc1-bbf7-cc94564ca0dd does not exist
Dec  5 02:06:20 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dff142e8-b791-4128-9069-bbe51b722a36 does not exist
Dec  5 02:06:20 compute-0 nova_compute[349548]: 2025-12-05 02:06:20.271 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:06:21 compute-0 nova_compute[349548]: 2025-12-05 02:06:21.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.7 KiB/s wr, 52 op/s
Dec  5 02:06:22 compute-0 nova_compute[349548]: 2025-12-05 02:06:22.106 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:22 compute-0 podman[438527]: 2025-12-05 02:06:22.735620364 +0000 UTC m=+0.130640972 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:06:22 compute-0 podman[438526]: 2025-12-05 02:06:22.753558787 +0000 UTC m=+0.155263072 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 02:06:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:06:24 compute-0 podman[438563]: 2025-12-05 02:06:24.724668845 +0000 UTC m=+0.127702750 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=base rhel9, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, distribution-scope=public, version=9.4, name=ubi9)
Dec  5 02:06:25 compute-0 nova_compute[349548]: 2025-12-05 02:06:25.276 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec  5 02:06:26 compute-0 nova_compute[349548]: 2025-12-05 02:06:26.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:06:26 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:06:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Dec  5 02:06:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:29 compute-0 podman[158197]: time="2025-12-05T02:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:06:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:06:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8161 "" "Go-http-client/1.1"
Dec  5 02:06:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec  5 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.227 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900375.2254682, b69a0e24-1bc4-46a5-92d7-367c1efd53df => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.227 349552 INFO nova.compute.manager [-] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.261 349552 DEBUG nova.compute.manager [None req-429f1c2f-f6a4-4448-a308-2d815af6be9c - - - - - -] [instance: b69a0e24-1bc4-46a5-92d7-367c1efd53df] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:06:30 compute-0 nova_compute[349548]: 2025-12-05 02:06:30.280 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:31 compute-0 nova_compute[349548]: 2025-12-05 02:06:31.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: ERROR   02:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:06:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:06:31 compute-0 nova_compute[349548]: 2025-12-05 02:06:31.977 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 25 op/s
Dec  5 02:06:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:06:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:06:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/954609985' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:06:34 compute-0 nova_compute[349548]: 2025-12-05 02:06:34.641 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:06:34 compute-0 podman[438604]: 2025-12-05 02:06:34.715500341 +0000 UTC m=+0.111008123 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:06:34 compute-0 podman[438603]: 2025-12-05 02:06:34.744559485 +0000 UTC m=+0.144981464 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:06:34 compute-0 podman[438606]: 2025-12-05 02:06:34.765535023 +0000 UTC m=+0.145552500 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Dec  5 02:06:34 compute-0 podman[438605]: 2025-12-05 02:06:34.786527792 +0000 UTC m=+0.171502888 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.106 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.108 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4096MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.110 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.174 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.175 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.205 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.284 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:06:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159397416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.681 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.696 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.734 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.754 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:06:35 compute-0 nova_compute[349548]: 2025-12-05 02:06:35.755 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.755 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.755 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.781 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.781 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:36 compute-0 nova_compute[349548]: 2025-12-05 02:06:36.980 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:38 compute-0 nova_compute[349548]: 2025-12-05 02:06:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:38 compute-0 nova_compute[349548]: 2025-12-05 02:06:38.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.321 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.322 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:06:38.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:06:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:40 compute-0 nova_compute[349548]: 2025-12-05 02:06:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:06:40 compute-0 nova_compute[349548]: 2025-12-05 02:06:40.289 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:41 compute-0 nova_compute[349548]: 2025-12-05 02:06:41.982 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:45 compute-0 nova_compute[349548]: 2025-12-05 02:06:45.292 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:06:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:06:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:06:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2446728686' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:06:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:06:46 compute-0 nova_compute[349548]: 2025-12-05 02:06:46.985 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:48 compute-0 ovn_controller[89286]: 2025-12-05T02:06:48Z|00064|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Dec  5 02:06:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:50 compute-0 nova_compute[349548]: 2025-12-05 02:06:50.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:50 compute-0 podman[438713]: 2025-12-05 02:06:50.717066821 +0000 UTC m=+0.116470665 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  5 02:06:50 compute-0 podman[438714]: 2025-12-05 02:06:50.735684313 +0000 UTC m=+0.132242077 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:06:51 compute-0 nova_compute[349548]: 2025-12-05 02:06:51.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:53 compute-0 podman[438756]: 2025-12-05 02:06:53.681781021 +0000 UTC m=+0.098401619 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:06:53 compute-0 podman[438755]: 2025-12-05 02:06:53.696558745 +0000 UTC m=+0.118207544 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  5 02:06:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:55 compute-0 nova_compute[349548]: 2025-12-05 02:06:55.300 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:55 compute-0 podman[438792]: 2025-12-05 02:06:55.744588801 +0000 UTC m=+0.155750857 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9)
Dec  5 02:06:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:06:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:06:56.201 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:06:56 compute-0 nova_compute[349548]: 2025-12-05 02:06:56.991 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:06:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:06:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:06:59 compute-0 podman[158197]: time="2025-12-05T02:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:06:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:06:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8180 "" "Go-http-client/1.1"
Dec  5 02:07:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:00 compute-0 nova_compute[349548]: 2025-12-05 02:07:00.304 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: ERROR   02:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:07:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:07:01 compute-0 nova_compute[349548]: 2025-12-05 02:07:01.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:05 compute-0 nova_compute[349548]: 2025-12-05 02:07:05.309 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:05 compute-0 podman[438813]: 2025-12-05 02:07:05.712071512 +0000 UTC m=+0.109670465 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:07:05 compute-0 podman[438814]: 2025-12-05 02:07:05.731792034 +0000 UTC m=+0.126353152 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:07:05 compute-0 podman[438816]: 2025-12-05 02:07:05.737531325 +0000 UTC m=+0.111709132 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git)
Dec  5 02:07:05 compute-0 podman[438815]: 2025-12-05 02:07:05.769170302 +0000 UTC m=+0.152559207 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 02:07:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:06 compute-0 nova_compute[349548]: 2025-12-05 02:07:06.996 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:10 compute-0 nova_compute[349548]: 2025-12-05 02:07:10.312 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:12 compute-0 nova_compute[349548]: 2025-12-05 02:07:11.999 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:15 compute-0 nova_compute[349548]: 2025-12-05 02:07:15.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:07:16
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'vms', '.mgr', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root']
Dec  5 02:07:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:07:17 compute-0 nova_compute[349548]: 2025-12-05 02:07:17.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:07:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:07:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:20 compute-0 nova_compute[349548]: 2025-12-05 02:07:20.317 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:20 compute-0 podman[438976]: 2025-12-05 02:07:20.897555406 +0000 UTC m=+0.098924784 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  5 02:07:20 compute-0 podman[438977]: 2025-12-05 02:07:20.925999164 +0000 UTC m=+0.126954430 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:07:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:07:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:07:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:22 compute-0 nova_compute[349548]: 2025-12-05 02:07:22.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a0fb62f1-eee1-46fa-8310-59dda40c1384 does not exist
Dec  5 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 11e363de-0c9a-45c6-bad3-e01f4241a3fa does not exist
Dec  5 02:07:22 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 48958a0f-ec99-43da-adc0-893522cc56ad does not exist
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:07:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:07:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:07:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:23 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.655078757 +0000 UTC m=+0.093742768 container create 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.619785078 +0000 UTC m=+0.058449149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:23 compute-0 systemd[1]: Started libpod-conmon-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope.
Dec  5 02:07:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.819465215 +0000 UTC m=+0.258129286 container init 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.831744269 +0000 UTC m=+0.270408280 container start 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.838796727 +0000 UTC m=+0.277460728 container attach 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:07:23 compute-0 admiring_mahavira[439349]: 167 167
Dec  5 02:07:23 compute-0 systemd[1]: libpod-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope: Deactivated successfully.
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.843026295 +0000 UTC m=+0.281690306 container died 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-844f66852ba57d0e1401f8f387cc10fc72ea5a1fff22d6a3f455816e3ae451ca-merged.mount: Deactivated successfully.
Dec  5 02:07:23 compute-0 podman[439352]: 2025-12-05 02:07:23.906967668 +0000 UTC m=+0.135766077 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:07:23 compute-0 podman[439333]: 2025-12-05 02:07:23.918037008 +0000 UTC m=+0.356700999 container remove 629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:07:23 compute-0 podman[439350]: 2025-12-05 02:07:23.931713851 +0000 UTC m=+0.163258697 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:07:23 compute-0 systemd[1]: libpod-conmon-629e959f035ecb71f69d41eadbd279246ac1741cea83e0ed03156af95c5f7bbb.scope: Deactivated successfully.
Dec  5 02:07:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.182251614 +0000 UTC m=+0.097261338 container create 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.154163706 +0000 UTC m=+0.069173410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:24 compute-0 systemd[1]: Started libpod-conmon-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope.
Dec  5 02:07:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.371141188 +0000 UTC m=+0.286150932 container init 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.391500739 +0000 UTC m=+0.306510433 container start 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:07:24 compute-0 podman[439407]: 2025-12-05 02:07:24.397492747 +0000 UTC m=+0.312502491 container attach 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:25 compute-0 nova_compute[349548]: 2025-12-05 02:07:25.321 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:25 compute-0 optimistic_moser[439421]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:07:25 compute-0 optimistic_moser[439421]: --> relative data size: 1.0
Dec  5 02:07:25 compute-0 optimistic_moser[439421]: --> All data devices are unavailable
Dec  5 02:07:25 compute-0 systemd[1]: libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Deactivated successfully.
Dec  5 02:07:25 compute-0 systemd[1]: libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Consumed 1.300s CPU time.
Dec  5 02:07:25 compute-0 conmon[439421]: conmon 1993b5864afd01f3e895 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope/container/memory.events
Dec  5 02:07:25 compute-0 podman[439452]: 2025-12-05 02:07:25.823207438 +0000 UTC m=+0.050671732 container died 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e4e0b3f4115b611983f10fd1a2bf372f62bb16faaee8ae605a1f3b2b85810e-merged.mount: Deactivated successfully.
Dec  5 02:07:25 compute-0 podman[439452]: 2025-12-05 02:07:25.925100914 +0000 UTC m=+0.152565188 container remove 1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 02:07:25 compute-0 systemd[1]: libpod-conmon-1993b5864afd01f3e895dec278cb72ac94d94ae4f5440d71bb7943ada0421566.scope: Deactivated successfully.
Dec  5 02:07:26 compute-0 podman[439466]: 2025-12-05 02:07:26.019448608 +0000 UTC m=+0.113343558 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, distribution-scope=public)
Dec  5 02:07:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:07:27 compute-0 nova_compute[349548]: 2025-12-05 02:07:27.008 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00025334537995702286 of space, bias 1.0, pg target 0.07600361398710685 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:07:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.159805972 +0000 UTC m=+0.082499983 container create 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.124459461 +0000 UTC m=+0.047153542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:27 compute-0 systemd[1]: Started libpod-conmon-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope.
Dec  5 02:07:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.30815691 +0000 UTC m=+0.230850961 container init 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.326770222 +0000 UTC m=+0.249464243 container start 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.333607494 +0000 UTC m=+0.256301565 container attach 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 02:07:27 compute-0 vigilant_nash[439638]: 167 167
Dec  5 02:07:27 compute-0 systemd[1]: libpod-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope: Deactivated successfully.
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.339616532 +0000 UTC m=+0.262310553 container died 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Dec  5 02:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a4067de49258979567a0deef701ec512c00fe301ac30f4132f698124c3b72b-merged.mount: Deactivated successfully.
Dec  5 02:07:27 compute-0 podman[439623]: 2025-12-05 02:07:27.417406163 +0000 UTC m=+0.340100184 container remove 395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:07:27 compute-0 systemd[1]: libpod-conmon-395cb049764daf1d2f264b3020978714fcc04ad434ef912a84c78349db3a2c9a.scope: Deactivated successfully.
Dec  5 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.722454363 +0000 UTC m=+0.098975825 container create 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.686166726 +0000 UTC m=+0.062688258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:27 compute-0 systemd[1]: Started libpod-conmon-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope.
Dec  5 02:07:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.908686813 +0000 UTC m=+0.285208325 container init 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.934019013 +0000 UTC m=+0.310540475 container start 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:07:27 compute-0 podman[439660]: 2025-12-05 02:07:27.940163905 +0000 UTC m=+0.316685407 container attach 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Dec  5 02:07:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]: {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    "0": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "devices": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "/dev/loop3"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            ],
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_name": "ceph_lv0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_size": "21470642176",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "name": "ceph_lv0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "tags": {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_name": "ceph",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.crush_device_class": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.encrypted": "0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_id": "0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.vdo": "0"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            },
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "vg_name": "ceph_vg0"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        }
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    ],
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    "1": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "devices": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "/dev/loop4"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            ],
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_name": "ceph_lv1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_size": "21470642176",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "name": "ceph_lv1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "tags": {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_name": "ceph",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.crush_device_class": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.encrypted": "0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_id": "1",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.vdo": "0"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            },
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "vg_name": "ceph_vg1"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        }
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    ],
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    "2": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "devices": [
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "/dev/loop5"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            ],
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_name": "ceph_lv2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_size": "21470642176",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "name": "ceph_lv2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "tags": {
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.cluster_name": "ceph",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.crush_device_class": "",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.encrypted": "0",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osd_id": "2",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:                "ceph.vdo": "0"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            },
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "type": "block",
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:            "vg_name": "ceph_vg2"
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:        }
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]:    ]
Dec  5 02:07:28 compute-0 hardcore_goldwasser[439676]: }
Dec  5 02:07:28 compute-0 systemd[1]: libpod-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope: Deactivated successfully.
Dec  5 02:07:28 compute-0 podman[439660]: 2025-12-05 02:07:28.796833836 +0000 UTC m=+1.173355278 container died 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 02:07:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fbf0a9e98881ef8a907da13102b2d161b26f6c411f8df1cd28650950fa21f97-merged.mount: Deactivated successfully.
Dec  5 02:07:28 compute-0 podman[439660]: 2025-12-05 02:07:28.901237833 +0000 UTC m=+1.277759295 container remove 570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 02:07:28 compute-0 systemd[1]: libpod-conmon-570127c000464e0d4942a2e3068fdec2cbb7b94e676011661290ea6c864fb48a.scope: Deactivated successfully.
Dec  5 02:07:29 compute-0 podman[158197]: time="2025-12-05T02:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:07:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:07:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8177 "" "Go-http-client/1.1"
Dec  5 02:07:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.131321211 +0000 UTC m=+0.079072067 container create 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.107238316 +0000 UTC m=+0.054989192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:30 compute-0 systemd[1]: Started libpod-conmon-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope.
Dec  5 02:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.274038272 +0000 UTC m=+0.221789178 container init 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.291537902 +0000 UTC m=+0.239288768 container start 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:07:30 compute-0 crazy_hugle[439853]: 167 167
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.300084702 +0000 UTC m=+0.247835558 container attach 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:07:30 compute-0 systemd[1]: libpod-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope: Deactivated successfully.
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.302434367 +0000 UTC m=+0.250185263 container died 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:07:30 compute-0 nova_compute[349548]: 2025-12-05 02:07:30.327 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-164a0784eb7ea087632fcc88a4bd3e053ba1b0a94051bc20c3e871c7db1d22de-merged.mount: Deactivated successfully.
Dec  5 02:07:30 compute-0 podman[439838]: 2025-12-05 02:07:30.37851713 +0000 UTC m=+0.326267976 container remove 307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:07:30 compute-0 systemd[1]: libpod-conmon-307e25cd9e1826ebbf5b0dc9255f358984c58c6c994f722d0fe994ffb513eeeb.scope: Deactivated successfully.
Dec  5 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.624851565 +0000 UTC m=+0.089502420 container create 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.593317521 +0000 UTC m=+0.057968386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:07:30 compute-0 systemd[1]: Started libpod-conmon-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope.
Dec  5 02:07:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.806795694 +0000 UTC m=+0.271446609 container init 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.824666985 +0000 UTC m=+0.289317850 container start 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:30 compute-0 podman[439879]: 2025-12-05 02:07:30.831542268 +0000 UTC m=+0.296193183 container attach 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: ERROR   02:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:07:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:07:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:31.590 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:07:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:31.592 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:07:31 compute-0 nova_compute[349548]: 2025-12-05 02:07:31.593 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]: {
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_id": 0,
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "type": "bluestore"
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    },
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_id": 1,
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "type": "bluestore"
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    },
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_id": 2,
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:        "type": "bluestore"
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]:    }
Dec  5 02:07:31 compute-0 vigorous_mendeleev[439895]: }
Dec  5 02:07:32 compute-0 systemd[1]: libpod-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Deactivated successfully.
Dec  5 02:07:32 compute-0 podman[439879]: 2025-12-05 02:07:32.009550907 +0000 UTC m=+1.474201772 container died 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:32 compute-0 systemd[1]: libpod-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Consumed 1.186s CPU time.
Dec  5 02:07:32 compute-0 nova_compute[349548]: 2025-12-05 02:07:32.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b096ae5052943d8da919db06bbc6afbfab8dddad6b4ed6362761dd4ab9e6f840-merged.mount: Deactivated successfully.
Dec  5 02:07:32 compute-0 podman[439879]: 2025-12-05 02:07:32.103215702 +0000 UTC m=+1.567866537 container remove 5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:07:32 compute-0 systemd[1]: libpod-conmon-5c7c016273d3aac5a445fb02165454241ee3a09bd0058b4c3b2ab4cc1d1324d2.scope: Deactivated successfully.
Dec  5 02:07:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:07:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:07:32 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 371e2a94-8120-4590-9854-89f00f8cb64f does not exist
Dec  5 02:07:32 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a3dc9336-5824-4bed-953e-eb43ed5e065f does not exist
Dec  5 02:07:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:33 compute-0 nova_compute[349548]: 2025-12-05 02:07:33.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:33 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:07:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:34 compute-0 nova_compute[349548]: 2025-12-05 02:07:34.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:34 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:34.594 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.098 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:07:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3272371931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:07:35 compute-0 nova_compute[349548]: 2025-12-05 02:07:35.613 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:07:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 16 MiB data, 229 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.138 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4098MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.225 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.226 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.248 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:07:36 compute-0 podman[440035]: 2025-12-05 02:07:36.741516469 +0000 UTC m=+0.142158075 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:07:36 compute-0 podman[440037]: 2025-12-05 02:07:36.745643275 +0000 UTC m=+0.131066735 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git)
Dec  5 02:07:36 compute-0 podman[440034]: 2025-12-05 02:07:36.75366277 +0000 UTC m=+0.159384489 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd)
Dec  5 02:07:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:07:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842685520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:07:36 compute-0 podman[440036]: 2025-12-05 02:07:36.776111039 +0000 UTC m=+0.170008746 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.787 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.796 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.817 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.820 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:07:36 compute-0 nova_compute[349548]: 2025-12-05 02:07:36.820 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.015 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec  5 02:07:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec  5 02:07:37 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.821 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.992 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.992 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.993 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:37 compute-0 nova_compute[349548]: 2025-12-05 02:07:37.994 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:07:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 921 B/s rd, 1.2 MiB/s wr, 1 op/s
Dec  5 02:07:39 compute-0 nova_compute[349548]: 2025-12-05 02:07:39.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:39 compute-0 nova_compute[349548]: 2025-12-05 02:07:39.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec  5 02:07:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec  5 02:07:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec  5 02:07:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 28 MiB data, 229 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1.6 MiB/s wr, 2 op/s
Dec  5 02:07:40 compute-0 nova_compute[349548]: 2025-12-05 02:07:40.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:41 compute-0 nova_compute[349548]: 2025-12-05 02:07:41.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:42 compute-0 nova_compute[349548]: 2025-12-05 02:07:42.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 49 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Dec  5 02:07:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec  5 02:07:44 compute-0 nova_compute[349548]: 2025-12-05 02:07:44.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:07:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:07:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:07:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:07:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2540063627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:07:45 compute-0 nova_compute[349548]: 2025-12-05 02:07:45.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 41 op/s
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:07:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:07:47 compute-0 nova_compute[349548]: 2025-12-05 02:07:47.021 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.8 MiB/s wr, 36 op/s
Dec  5 02:07:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.6 MiB/s wr, 33 op/s
Dec  5 02:07:50 compute-0 nova_compute[349548]: 2025-12-05 02:07:50.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:51 compute-0 podman[440123]: 2025-12-05 02:07:51.739773018 +0000 UTC m=+0.147411543 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  5 02:07:51 compute-0 podman[440124]: 2025-12-05 02:07:51.756190508 +0000 UTC m=+0.158180995 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:07:52 compute-0 nova_compute[349548]: 2025-12-05 02:07:52.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.4 MiB/s wr, 30 op/s
Dec  5 02:07:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 683 KiB/s wr, 10 op/s
Dec  5 02:07:54 compute-0 podman[440167]: 2025-12-05 02:07:54.723806228 +0000 UTC m=+0.131330492 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, 
org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  5 02:07:54 compute-0 podman[440166]: 2025-12-05 02:07:54.736415441 +0000 UTC m=+0.139366927 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  5 02:07:55 compute-0 nova_compute[349548]: 2025-12-05 02:07:55.347 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.202 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.203 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:07:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:07:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:07:56 compute-0 podman[440205]: 2025-12-05 02:07:56.710810134 +0000 UTC m=+0.117702381 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 02:07:57 compute-0 nova_compute[349548]: 2025-12-05 02:07:57.029 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:07:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:07:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:07:59 compute-0 podman[158197]: time="2025-12-05T02:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:07:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8174 "" "Go-http-client/1.1"
Dec  5 02:08:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:00 compute-0 nova_compute[349548]: 2025-12-05 02:08:00.350 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: ERROR   02:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:08:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:08:01 compute-0 ovn_controller[89286]: 2025-12-05T02:08:01Z|00065|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  5 02:08:02 compute-0 nova_compute[349548]: 2025-12-05 02:08:02.031 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:05 compute-0 nova_compute[349548]: 2025-12-05 02:08:05.354 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:07 compute-0 nova_compute[349548]: 2025-12-05 02:08:07.034 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:07 compute-0 podman[440225]: 2025-12-05 02:08:07.747385624 +0000 UTC m=+0.145600862 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:08:07 compute-0 podman[440224]: 2025-12-05 02:08:07.752089586 +0000 UTC m=+0.160056278 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec  5 02:08:07 compute-0 podman[440227]: 2025-12-05 02:08:07.767687363 +0000 UTC m=+0.155355206 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc.)
Dec  5 02:08:07 compute-0 podman[440226]: 2025-12-05 02:08:07.775315467 +0000 UTC m=+0.167034333 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec  5 02:08:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:10 compute-0 nova_compute[349548]: 2025-12-05 02:08:10.358 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:11 compute-0 nova_compute[349548]: 2025-12-05 02:08:11.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:12 compute-0 nova_compute[349548]: 2025-12-05 02:08:12.037 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:14 compute-0 nova_compute[349548]: 2025-12-05 02:08:14.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:15 compute-0 nova_compute[349548]: 2025-12-05 02:08:15.137 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:15 compute-0 nova_compute[349548]: 2025-12-05 02:08:15.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:16 compute-0 nova_compute[349548]: 2025-12-05 02:08:16.065 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:08:16
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root']
Dec  5 02:08:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:08:17 compute-0 nova_compute[349548]: 2025-12-05 02:08:17.040 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:17 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:08:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:18 compute-0 nova_compute[349548]: 2025-12-05 02:08:18.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:20 compute-0 nova_compute[349548]: 2025-12-05 02:08:20.363 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:22 compute-0 nova_compute[349548]: 2025-12-05 02:08:22.044 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:22 compute-0 nova_compute[349548]: 2025-12-05 02:08:22.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:22 compute-0 podman[440313]: 2025-12-05 02:08:22.672526162 +0000 UTC m=+0.089886221 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  5 02:08:22 compute-0 podman[440314]: 2025-12-05 02:08:22.691404821 +0000 UTC m=+0.103249085 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:08:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:23 compute-0 nova_compute[349548]: 2025-12-05 02:08:23.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:23 compute-0 nova_compute[349548]: 2025-12-05 02:08:23.833 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:24 compute-0 nova_compute[349548]: 2025-12-05 02:08:24.290 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:25 compute-0 nova_compute[349548]: 2025-12-05 02:08:25.367 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:25 compute-0 podman[440355]: 2025-12-05 02:08:25.706202783 +0000 UTC m=+0.113502683 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:08:25 compute-0 podman[440354]: 2025-12-05 02:08:25.748129688 +0000 UTC m=+0.152907667 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:08:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:08:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:08:27 compute-0 nova_compute[349548]: 2025-12-05 02:08:27.047 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:27 compute-0 podman[440393]: 2025-12-05 02:08:27.723575546 +0000 UTC m=+0.131087705 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 02:08:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:29 compute-0 nova_compute[349548]: 2025-12-05 02:08:29.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:29 compute-0 podman[158197]: time="2025-12-05T02:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:08:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8167 "" "Go-http-client/1.1"
Dec  5 02:08:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:30 compute-0 nova_compute[349548]: 2025-12-05 02:08:30.370 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: ERROR   02:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:08:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:08:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:32.011 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:08:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:32.013 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:08:32 compute-0 nova_compute[349548]: 2025-12-05 02:08:32.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:32 compute-0 nova_compute[349548]: 2025-12-05 02:08:32.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.379 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.381 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.401 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.604 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.605 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.619 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.620 349552 INFO nova.compute.claims [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a1545e1c-6890-4c10-9bbe-0866a3184cdb does not exist
Dec  5 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 23201496-1d82-464d-b3ed-874c6c9cae29 does not exist
Dec  5 02:08:33 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2fa80860-0310-4b1a-8aab-6910291d19ac does not exist
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:08:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:08:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:08:33 compute-0 nova_compute[349548]: 2025-12-05 02:08:33.767 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/829770048' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.288 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.300 349552 DEBUG nova.compute.provider_tree [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.329 349552 DEBUG nova.scheduler.client.report [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.375 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.376 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:34 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.455 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.456 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.489 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.522 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.620 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.622 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.623 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating image(s)#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.672 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.693463337 +0000 UTC m=+0.097341500 container create 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.637656403 +0000 UTC m=+0.041534526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.747 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:34 compute-0 systemd[1]: Started libpod-conmon-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope.
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.793 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.806 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:34 compute-0 nova_compute[349548]: 2025-12-05 02:08:34.808 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.835673503 +0000 UTC m=+0.239551796 container init 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.857545516 +0000 UTC m=+0.261423679 container start 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.863124633 +0000 UTC m=+0.267002806 container attach 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Dec  5 02:08:34 compute-0 angry_mirzakhani[440759]: 167 167
Dec  5 02:08:34 compute-0 systemd[1]: libpod-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope: Deactivated successfully.
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.869377168 +0000 UTC m=+0.273255301 container died 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:08:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5dd0ee023f3266ff9c1677ef13b94e6a1afa0fe71d93b06f8e6dd9daa9a86c59-merged.mount: Deactivated successfully.
Dec  5 02:08:34 compute-0 podman[440701]: 2025-12-05 02:08:34.945171522 +0000 UTC m=+0.349049655 container remove 1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:08:34 compute-0 systemd[1]: libpod-conmon-1d54f129e8bb77bb961a43bc4edb04d8d8ddeef6dede3f2c02453236be1e6c53.scope: Deactivated successfully.
Dec  5 02:08:35 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:35.015 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.185 349552 DEBUG nova.policy [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e8484f22ce84af99708d2e728179b92', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '159039e5ad4a46a7be912cd9756c76c5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.206046124 +0000 UTC m=+0.076006681 container create f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.179544462 +0000 UTC m=+0.049505019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:35 compute-0 systemd[1]: Started libpod-conmon-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope.
Dec  5 02:08:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.347944332 +0000 UTC m=+0.217904869 container init f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.372262893 +0000 UTC m=+0.242223410 container start f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.373 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:35 compute-0 podman[440793]: 2025-12-05 02:08:35.377683525 +0000 UTC m=+0.247644082 container attach f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.443590) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515443686, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3366005, "memory_usage": 3427384, "flush_reason": "Manual Compaction"}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.465 349552 DEBUG nova.virt.libvirt.imagebackend [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/e9091bfb-b431-47c9-a284-79372046956b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/e9091bfb-b431-47c9-a284-79372046956b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515469531, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3299961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34743, "largest_seqno": 36794, "table_properties": {"data_size": 3290606, "index_size": 5913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18747, "raw_average_key_size": 20, "raw_value_size": 3271973, "raw_average_value_size": 3506, "num_data_blocks": 262, "num_entries": 933, "num_filter_entries": 933, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900294, "oldest_key_time": 1764900294, "file_creation_time": 1764900515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 26005 microseconds, and 10259 cpu microseconds.
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.469602) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3299961 bytes OK
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.469626) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471607) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471623) EVENT_LOG_v1 {"time_micros": 1764900515471618, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.471642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3357438, prev total WAL file size 3357438, number of live WAL files 2.
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.473043) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3222KB)], [80(6834KB)]
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515473122, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10298778, "oldest_snapshot_seqno": -1}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 5669 keys, 8588057 bytes, temperature: kUnknown
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515543525, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8588057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8550812, "index_size": 21967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 142945, "raw_average_key_size": 25, "raw_value_size": 8448933, "raw_average_value_size": 1490, "num_data_blocks": 902, "num_entries": 5669, "num_filter_entries": 5669, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900515, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.544562) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8588057 bytes
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.547082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.2 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.7 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6187, records dropped: 518 output_compression: NoCompression
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.547104) EVENT_LOG_v1 {"time_micros": 1764900515547094, "job": 46, "event": "compaction_finished", "compaction_time_micros": 70466, "compaction_time_cpu_micros": 42322, "output_level": 6, "num_output_files": 1, "total_output_size": 8588057, "num_input_records": 6187, "num_output_records": 5669, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515548232, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900515549874, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.472727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:08:35.550260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.764 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.765 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.807 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.881 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.882 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.894 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  5 02:08:35 compute-0 nova_compute[349548]: 2025-12-05 02:08:35.895 349552 INFO nova.compute.claims [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Claim successful on node compute-0.ctlplane.example.com
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.058 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:08:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.080 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.081 349552 DEBUG nova.compute.provider_tree [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.098 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.126 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.237 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec  5 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec  5 02:08:36 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.631 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Successfully created port: 1eebaade-abb1-412c-95f2-2b7240026f85 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  5 02:08:36 compute-0 naughty_ride[440809]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:08:36 compute-0 naughty_ride[440809]: --> relative data size: 1.0
Dec  5 02:08:36 compute-0 naughty_ride[440809]: --> All data devices are unavailable
Dec  5 02:08:36 compute-0 systemd[1]: libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Deactivated successfully.
Dec  5 02:08:36 compute-0 systemd[1]: libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Consumed 1.246s CPU time.
Dec  5 02:08:36 compute-0 conmon[440809]: conmon f03cd81a17d2c9550617 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope/container/memory.events
Dec  5 02:08:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4028471828' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.787 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.795 349552 DEBUG nova.compute.provider_tree [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:08:36 compute-0 podman[440858]: 2025-12-05 02:08:36.799637071 +0000 UTC m=+0.053617664 container died f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.812 349552 DEBUG nova.scheduler.client.report [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:08:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-57dd04497f02eb200fc90529b1b57adb27bea18b692c93996aae23eb7cfff968-merged.mount: Deactivated successfully.
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.846 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.847 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  5 02:08:36 compute-0 podman[440858]: 2025-12-05 02:08:36.877193775 +0000 UTC m=+0.131174348 container remove f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:08:36 compute-0 systemd[1]: libpod-conmon-f03cd81a17d2c9550617266a7a92de09912920c14c8254ef721727efcc55cee2.scope: Deactivated successfully.
Dec  5 02:08:36 compute-0 nova_compute[349548]: 2025-12-05 02:08:36.953 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.055 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.061 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.063 349552 DEBUG nova.virt.images [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] e9091bfb-b431-47c9-a284-79372046956b was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.065 349552 DEBUG nova.privsep.utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.067 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.091 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.093 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.284 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.part /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.290 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.391 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3.converted --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.393 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.436 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.450 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.489 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.490 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.490 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.495 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.495 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.496 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.496 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.503 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.765 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.766 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.767 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.768 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.772 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.793 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.827 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:08:37 compute-0 nova_compute[349548]: 2025-12-05 02:08:37.888 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:37 compute-0 podman[441072]: 2025-12-05 02:08:37.962453274 +0000 UTC m=+0.057961126 container create 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:08:38 compute-0 systemd[1]: Started libpod-conmon-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope.
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:37.942349141 +0000 UTC m=+0.037857013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.057 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.059 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.060 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating image(s)#033[00m
Dec  5 02:08:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.6 KiB/s wr, 16 op/s
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.077071457 +0000 UTC m=+0.172579329 container init 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.087249022 +0000 UTC m=+0.182756864 container start 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.091481931 +0000 UTC m=+0.186989793 container attach 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:08:38 compute-0 sad_solomon[441152]: 167 167
Dec  5 02:08:38 compute-0 systemd[1]: libpod-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope: Deactivated successfully.
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.09608333 +0000 UTC m=+0.191591172 container died 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:08:38 compute-0 podman[441131]: 2025-12-05 02:08:38.120453593 +0000 UTC m=+0.097301959 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:08:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a563331aa56326b2f534c415961d606a01eb1226e6a17d03c55199c8249665a-merged.mount: Deactivated successfully.
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.144 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:38 compute-0 podman[441072]: 2025-12-05 02:08:38.148054446 +0000 UTC m=+0.243562288 container remove 2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_solomon, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 02:08:38 compute-0 podman[441120]: 2025-12-05 02:08:38.151118292 +0000 UTC m=+0.134746468 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  5 02:08:38 compute-0 podman[441134]: 2025-12-05 02:08:38.156249636 +0000 UTC m=+0.123728099 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm)
Dec  5 02:08:38 compute-0 systemd[1]: libpod-conmon-2563d7667c29b2e7d38f9336d3f06bd7c9eab96ce3948c5f2da43bf87ce1ddce.scope: Deactivated successfully.
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.183 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:38 compute-0 podman[441132]: 2025-12-05 02:08:38.19384109 +0000 UTC m=+0.168771942 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.216 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.231 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168538381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.251 349552 DEBUG nova.policy [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3439b5cde2ff4830bb0294f007842282', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.271 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] resizing rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.321253831 +0000 UTC m=+0.050824826 container create c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.327 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.330 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.332 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': [], 'disk.device.write.requests': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.336 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.334 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.337 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.338 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:08:38.339 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.340 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.341 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.376 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.383 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:38 compute-0 systemd[1]: Started libpod-conmon-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope.
Dec  5 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.298847003 +0000 UTC m=+0.028418018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.455162564 +0000 UTC m=+0.184733599 container init c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.469139606 +0000 UTC m=+0.198710591 container start c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:08:38 compute-0 podman[441303]: 2025-12-05 02:08:38.474925728 +0000 UTC m=+0.204496743 container attach c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.551 349552 DEBUG nova.objects.instance [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'migration_context' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.645 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.646 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Ensure instance console log exists: /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.647 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.647 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.649 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.744 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:38 compute-0 nova_compute[349548]: 2025-12-05 02:08:38.870 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] resizing rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.078 349552 DEBUG nova.objects.instance [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'migration_context' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.094 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.095 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Ensure instance console log exists: /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.096 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.097 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.097 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.238 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.239 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4025MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.239 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.240 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.308 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Successfully created port: 2ac46e0a-6888-440f-b155-d4b0e8677304 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance a2605a46-d779-4fc3-aeff-1e040dbcf17d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]: {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    "0": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "devices": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "/dev/loop3"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            ],
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_name": "ceph_lv0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_size": "21470642176",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "name": "ceph_lv0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "tags": {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_name": "ceph",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.crush_device_class": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.encrypted": "0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_id": "0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.vdo": "0"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            },
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "vg_name": "ceph_vg0"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        }
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    ],
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    "1": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "devices": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "/dev/loop4"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            ],
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_name": "ceph_lv1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_size": "21470642176",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "name": "ceph_lv1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "tags": {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_name": "ceph",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.crush_device_class": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.encrypted": "0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_id": "1",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.vdo": "0"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            },
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "vg_name": "ceph_vg1"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        }
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    ],
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    "2": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "devices": [
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "/dev/loop5"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            ],
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_name": "ceph_lv2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_size": "21470642176",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "name": "ceph_lv2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "tags": {
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.cluster_name": "ceph",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.crush_device_class": "",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.encrypted": "0",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osd_id": "2",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:                "ceph.vdo": "0"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            },
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "type": "block",
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:            "vg_name": "ceph_vg2"
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:        }
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]:    ]
Dec  5 02:08:39 compute-0 wonderful_maxwell[441358]: }
Dec  5 02:08:39 compute-0 systemd[1]: libpod-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope: Deactivated successfully.
Dec  5 02:08:39 compute-0 podman[441303]: 2025-12-05 02:08:39.408067014 +0000 UTC m=+1.137638119 container died c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.431 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a400c0708c9030dbdade3d36a7cf889a04e5ee8fc271b535fc6e1bd732e91b7b-merged.mount: Deactivated successfully.
Dec  5 02:08:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec  5 02:08:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec  5 02:08:39 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec  5 02:08:39 compute-0 podman[441303]: 2025-12-05 02:08:39.516946076 +0000 UTC m=+1.246517081 container remove c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_maxwell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:08:39 compute-0 systemd[1]: libpod-conmon-c7a3b38ebce188ef0eaf8ffd3f63f22a6d4826d6fbe1dd244df9d4cec9299e13.scope: Deactivated successfully.
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.554 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Successfully updated port: 1eebaade-abb1-412c-95f2-2b7240026f85 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.579 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.580 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.581 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.646 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.647 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.674 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.757 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:39 compute-0 nova_compute[349548]: 2025-12-05 02:08:39.931 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.000 349552 DEBUG nova.compute.manager [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.001 349552 DEBUG nova.compute.manager [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing instance network info cache due to event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.002 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4123846993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.027 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.039 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.064 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:08:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 57 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 KiB/s wr, 20 op/s
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.092 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.093 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.094 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.106 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.107 349552 INFO nova.compute.claims [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.321 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.627305758 +0000 UTC m=+0.076790504 container create ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.599585651 +0000 UTC m=+0.049070467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:40 compute-0 systemd[1]: Started libpod-conmon-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope.
Dec  5 02:08:40 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.781643164 +0000 UTC m=+0.231127930 container init ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.793508476 +0000 UTC m=+0.242993232 container start ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.799461533 +0000 UTC m=+0.248946289 container attach ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Dec  5 02:08:40 compute-0 infallible_maxwell[441684]: 167 167
Dec  5 02:08:40 compute-0 systemd[1]: libpod-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope: Deactivated successfully.
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.80361849 +0000 UTC m=+0.253103236 container died ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 02:08:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442380969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f65eadd4a47dfbe4c53b2488e5456cdf0eed285da2ab64516bebf27071ffcc-merged.mount: Deactivated successfully.
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.858 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.868 349552 DEBUG nova.compute.provider_tree [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:08:40 compute-0 podman[441669]: 2025-12-05 02:08:40.875020491 +0000 UTC m=+0.324505217 container remove ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 02:08:40 compute-0 systemd[1]: libpod-conmon-ee04334df6bac8d07669ff02940fb4194b7e6efa41f6095a510cf8daedd241d3.scope: Deactivated successfully.
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.898 349552 DEBUG nova.scheduler.client.report [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.931 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:40 compute-0 nova_compute[349548]: 2025-12-05 02:08:40.932 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.027 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.028 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.049 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.077 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.085 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Successfully updated port: 2ac46e0a-6888-440f-b155-d4b0e8677304 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.123 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.124 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.124 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  5 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.140634556 +0000 UTC m=+0.094202722 container create b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.103977068 +0000 UTC m=+0.057545304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.206 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.210 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.211 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating image(s)
Dec  5 02:08:41 compute-0 systemd[1]: Started libpod-conmon-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope.
Dec  5 02:08:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.274 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.307788051 +0000 UTC m=+0.261356287 container init b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.334309855 +0000 UTC m=+0.287878051 container start b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:08:41 compute-0 podman[441709]: 2025-12-05 02:08:41.341428464 +0000 UTC m=+0.294996650 container attach b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.377 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.431 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.452 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.482 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.488 349552 DEBUG nova.compute.manager [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.488 349552 DEBUG nova.compute.manager [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.489 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.532 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.533 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.534 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.535 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.586 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.594 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 59e35a32-9023-4e49-be56-9da10df3027f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.628 349552 DEBUG nova.policy [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b4745812b7eb47908ded25b1eb7c7328', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.665 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.666 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.879 349552 DEBUG nova.network.neutron [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.906 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.908 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance network_info: |[{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.910 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.911 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.926 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start _get_guest_xml network_info=[{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.961 349552 WARNING nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.978 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.980 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.990 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.991 349552 DEBUG nova.virt.libvirt.host [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.993 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.994 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.995 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.996 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  5 02:08:41 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.997 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:41.999 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.000 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.001 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.002 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.002 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.003 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.003 349552 DEBUG nova.virt.hardware [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.007 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.034 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 59e35a32-9023-4e49-be56-9da10df3027f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 136 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 97 op/s
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.184 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] resizing rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.446 349552 DEBUG nova.objects.instance [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'migration_context' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.466 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.467 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Ensure instance console log exists: /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.467 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.468 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.468 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270075970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.529 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]: {
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_id": 0,
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "type": "bluestore"
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    },
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_id": 1,
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "type": "bluestore"
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    },
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_id": 2,
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:        "type": "bluestore"
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]:    }
Dec  5 02:08:42 compute-0 mystifying_tesla[441729]: }
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.563 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.571 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:42 compute-0 systemd[1]: libpod-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Deactivated successfully.
Dec  5 02:08:42 compute-0 podman[441709]: 2025-12-05 02:08:42.575838494 +0000 UTC m=+1.529406640 container died b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Dec  5 02:08:42 compute-0 systemd[1]: libpod-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Consumed 1.199s CPU time.
Dec  5 02:08:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-0914bfca76c32e7eff7101de0b49d43e362000bea79fd992edea30373abc6afe-merged.mount: Deactivated successfully.
Dec  5 02:08:42 compute-0 podman[441709]: 2025-12-05 02:08:42.653615164 +0000 UTC m=+1.607183320 container remove b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tesla, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:08:42 compute-0 systemd[1]: libpod-conmon-b15dab387ff2f1d6f3ec075ae2eef4714bc7e08fdfa00e65500e5d9ca1a0aaf5.scope: Deactivated successfully.
Dec  5 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:08:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 979fbe6d-41dc-4d89-a08e-1e4cee8a64de does not exist
Dec  5 02:08:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b88c6ea8-1d4b-4f62-bdc3-92f9a8685a93 does not exist
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.832 349552 DEBUG nova.network.neutron [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.851 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.852 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance network_info: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.854 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.856 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.862 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start _get_guest_xml network_info=[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.872 349552 WARNING nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.886 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.888 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.896 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.898 349552 DEBUG nova.virt.libvirt.host [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.900 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.902 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.904 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.905 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.906 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.907 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.908 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.910 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.913 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.913 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.914 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.915 349552 DEBUG nova.virt.hardware [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:08:42 compute-0 nova_compute[349548]: 2025-12-05 02:08:42.919 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135350042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.049 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.051 349552 DEBUG nova.virt.libvirt.vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.052 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.053 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.054 349552 DEBUG nova.objects.instance [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.069 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <uuid>a2605a46-d779-4fc3-aeff-1e040dbcf17d</uuid>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <name>instance-00000006</name>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:name>tempest-ServersTestJSON-server-1341674106</nova:name>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:08:41</nova:creationTime>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:user uuid="5e8484f22ce84af99708d2e728179b92">tempest-ServersTestJSON-244710502-project-member</nova:user>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:project uuid="159039e5ad4a46a7be912cd9756c76c5">tempest-ServersTestJSON-244710502</nova:project>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <nova:port uuid="1eebaade-abb1-412c-95f2-2b7240026f85">
Dec  5 02:08:43 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <system>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="serial">a2605a46-d779-4fc3-aeff-1e040dbcf17d</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="uuid">a2605a46-d779-4fc3-aeff-1e040dbcf17d</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </system>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <os>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </os>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <features>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </features>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk">
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config">
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:43 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:af:f6:1b"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <target dev="tap1eebaade-ab"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/console.log" append="off"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <video>
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </video>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:08:43 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:08:43 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:08:43 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:08:43 compute-0 nova_compute[349548]: </domain>
Dec  5 02:08:43 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.070 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Preparing to wait for external event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.071 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.072 349552 DEBUG nova.virt.libvirt.vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.072 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.073 349552 DEBUG nova.network.os_vif_util [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.073 349552 DEBUG os_vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.074 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.075 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.079 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eebaade-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.079 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1eebaade-ab, col_values=(('external_ids', {'iface-id': '1eebaade-abb1-412c-95f2-2b7240026f85', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:f6:1b', 'vm-uuid': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:08:43 compute-0 NetworkManager[49092]: <info>  [1764900523.0840] manager: (tap1eebaade-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.092 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.093 349552 INFO os_vif [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab')#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.168 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.169 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.170 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] No VIF found with MAC fa:16:3e:af:f6:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.171 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Using config drive#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.216 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1720028422' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.415 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.453 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.463 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.695 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Creating config drive at /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config#033[00m
Dec  5 02:08:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.702 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6dj8d_8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.848 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf6dj8d_8" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.895 349552 DEBUG nova.storage.rbd_utils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] rbd image a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.905 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668676317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.964 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.966 349552 DEBUG nova.virt.libvirt.vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.967 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.969 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:43 compute-0 nova_compute[349548]: 2025-12-05 02:08:43.970 349552 DEBUG nova.objects.instance [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'pci_devices' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.005 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <uuid>939ae9f2-b89c-4a19-96de-ab4dfc882a35</uuid>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <name>instance-00000007</name>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-604018291</nova:name>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:08:42</nova:creationTime>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:user uuid="3439b5cde2ff4830bb0294f007842282">tempest-AttachInterfacesUnderV243Test-532006644-project-member</nova:user>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:project uuid="70b71e0f6ffe47ed86a910f90d71557a">tempest-AttachInterfacesUnderV243Test-532006644</nova:project>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <nova:port uuid="2ac46e0a-6888-440f-b155-d4b0e8677304">
Dec  5 02:08:44 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <system>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="serial">939ae9f2-b89c-4a19-96de-ab4dfc882a35</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="uuid">939ae9f2-b89c-4a19-96de-ab4dfc882a35</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </system>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <os>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </os>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <features>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </features>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk">
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config">
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:44 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:ca:ba:4f"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <target dev="tap2ac46e0a-68"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/console.log" append="off"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <video>
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </video>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:08:44 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:08:44 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:08:44 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:08:44 compute-0 nova_compute[349548]: </domain>
Dec  5 02:08:44 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.007 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Preparing to wait for external event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.008 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.009 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.010 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.012 349552 DEBUG nova.virt.libvirt.vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.013 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.015 349552 DEBUG nova.network.os_vif_util [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.017 349552 DEBUG os_vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.019 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.021 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.026 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.027 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2ac46e0a-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.029 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2ac46e0a-68, col_values=(('external_ids', {'iface-id': '2ac46e0a-6888-440f-b155-d4b0e8677304', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ca:ba:4f', 'vm-uuid': '939ae9f2-b89c-4a19-96de-ab4dfc882a35'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.0353] manager: (tap2ac46e0a-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.037 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.046 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.049 349552 INFO os_vif [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68')#033[00m
Dec  5 02:08:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 179 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.7 MiB/s wr, 158 op/s
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.109 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.110 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.110 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] No VIF found with MAC fa:16:3e:ca:ba:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.111 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Using config drive#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.155 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.189 349552 DEBUG oslo_concurrency.processutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config a2605a46-d779-4fc3-aeff-1e040dbcf17d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.190 349552 INFO nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deleting local config drive /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d/disk.config because it was imported into RBD.#033[00m
Dec  5 02:08:44 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.233 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Successfully created port: a240e2ef-1773-4509-ac04-eae1f5d36e08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:08:44 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 02:08:44 compute-0 kernel: tap1eebaade-ab: entered promiscuous mode
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.3601] manager: (tap1eebaade-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00066|binding|INFO|Claiming lport 1eebaade-abb1-412c-95f2-2b7240026f85 for this chassis.
Dec  5 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00067|binding|INFO|1eebaade-abb1-412c-95f2-2b7240026f85: Claiming fa:16:3e:af:f6:1b 10.100.0.5
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.381 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:f6:1b 10.100.0.5'], port_security=['fa:16:3e:af:f6:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '159039e5ad4a46a7be912cd9756c76c5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c4f1e166-f717-4795-a420-f74c256dc7dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec639c0d-4f01-43c3-a93f-8a1059f20fc9, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1eebaade-abb1-412c-95f2-2b7240026f85) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.384 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1eebaade-abb1-412c-95f2-2b7240026f85 in datapath 5a020a22-53e0-4ddc-b74b-9b343d75de26 bound to our chassis#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.387 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a020a22-53e0-4ddc-b74b-9b343d75de26#033[00m
Dec  5 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00068|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 ovn-installed in OVS
Dec  5 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00069|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 up in Southbound
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.406 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[310034ff-554d-4b3f-8289-c61c5deb6b90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.407 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5a020a22-51 in ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.410 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5a020a22-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.410 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17e7adea-8aae-4abb-ac77-9cdac72f2552]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.412 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7f9c9c77-4181-4a76-8125-dca886ea2368]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 systemd-machined[138700]: New machine qemu-6-instance-00000006.
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.431 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[507cedab-fbeb-4434-9173-41d897db52dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec  5 02:08:44 compute-0 systemd-udevd[442229]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.468 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d198da03-0739-4f3b-a703-8d0321b7a351]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.4866] device (tap1eebaade-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.4875] device (tap1eebaade-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.516 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[c57ddabc-6f54-4f76-bbf8-3bfa1e4a1832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 systemd-udevd[442234]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.5266] manager: (tap5a020a22-50): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.526 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa99a84-8e10-466b-bfc5-6a1f2c9db739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.567 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[99c80ede-9ded-4f1a-b29d-e5f34b0ba727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.571 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[435393b3-aa07-4519-972e-3583b6da8545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.6035] device (tap5a020a22-50): carrier: link connected
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.611 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0efc2be5-adf5-43b8-9555-9d7aec77b4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.636 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[66b13002-6001-4e38-99de-45e87ac59f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a020a22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:49:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661126, 'reachable_time': 44617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442263, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e51ec097-1236-47d8-a55f-482b91305146]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:49f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661126, 'tstamp': 661126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442264, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.665 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.666 349552 DEBUG nova.network.neutron [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.680 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1f1532-ab36-418d-b2ae-71749de82e66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a020a22-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:49:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661126, 'reachable_time': 44617, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442265, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.698 349552 DEBUG oslo_concurrency.lockutils [req-7fa8458c-f67c-4b57-b276-aa2a8b3ff6f1 req-4f8b9cd2-2cbf-4ed7-9c7a-4e160d7ec2d3 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.714 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[aebb8eae-8f29-4405-a5a7-28f92576f535]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.809 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f422ca83-3489-4ed9-bc43-df94ec43e7f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a020a22-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.811 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a020a22-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 kernel: tap5a020a22-50: entered promiscuous mode
Dec  5 02:08:44 compute-0 NetworkManager[49092]: <info>  [1764900524.8152] manager: (tap5a020a22-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.817 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.821 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.822 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a020a22-50, col_values=(('external_ids', {'iface-id': '2395c111-a45b-4516-ba09-9b57be3b16f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.829 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 ovn_controller[89286]: 2025-12-05T02:08:44Z|00070|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.830 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.832 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[86781aaf-fcbc-4b60-b4dc-a85c525ab507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.833 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-5a020a22-53e0-4ddc-b74b-9b343d75de26
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/5a020a22-53e0-4ddc-b74b-9b343d75de26.pid.haproxy
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID 5a020a22-53e0-4ddc-b74b-9b343d75de26
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:08:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:44.833 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'env', 'PROCESS_TAG=haproxy-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5a020a22-53e0-4ddc-b74b-9b343d75de26.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.837 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Creating config drive at /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.852 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xjzt62c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.962 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900524.9616396, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.963 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Started (Lifecycle Event)#033[00m
Dec  5 02:08:44 compute-0 nova_compute[349548]: 2025-12-05 02:08:44.989 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2xjzt62c" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.019 349552 DEBUG nova.storage.rbd_utils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] rbd image 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.026 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.056 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.063 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900524.9664264, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.063 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.089 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.095 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.122 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.285 349552 DEBUG oslo_concurrency.processutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config 939ae9f2-b89c-4a19-96de-ab4dfc882a35_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.286 349552 INFO nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deleting local config drive /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35/disk.config because it was imported into RBD.#033[00m
Dec  5 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.342163692 +0000 UTC m=+0.102719790 container create eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:08:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:08:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:08:45 compute-0 kernel: tap2ac46e0a-68: entered promiscuous mode
Dec  5 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.3720] manager: (tap2ac46e0a-68): new Tun device (/org/freedesktop/NetworkManager/Devices/40)
Dec  5 02:08:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:08:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1723955173' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00071|binding|INFO|Claiming lport 2ac46e0a-6888-440f-b155-d4b0e8677304 for this chassis.
Dec  5 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00072|binding|INFO|2ac46e0a-6888-440f-b155-d4b0e8677304: Claiming fa:16:3e:ca:ba:4f 10.100.0.11
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.298459167 +0000 UTC m=+0.059015275 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.390 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:ba:4f 10.100.0.11'], port_security=['fa:16:3e:ca:ba:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '939ae9f2-b89c-4a19-96de-ab4dfc882a35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fd91b173-28fd-4506-a2d4-b70d7da34ab9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1a9bd25-2abf-40fe-aac7-26f2653ba067, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2ac46e0a-6888-440f-b155-d4b0e8677304) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  5 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.3993] device (tap2ac46e0a-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00073|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 ovn-installed in OVS
Dec  5 02:08:45 compute-0 ovn_controller[89286]: 2025-12-05T02:08:45Z|00074|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 up in Southbound
Dec  5 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.4006] device (tap2ac46e0a-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:45 compute-0 systemd[1]: Started libpod-conmon-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope.
Dec  5 02:08:45 compute-0 systemd-machined[138700]: New machine qemu-7-instance-00000007.
Dec  5 02:08:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:45 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec  5 02:08:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12546ccb3acfced01c79d379c89ef48eed6791003c99536b68dbe8f03ece420/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.473011139 +0000 UTC m=+0.233567327 container init eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:08:45 compute-0 podman[442384]: 2025-12-05 02:08:45.484969355 +0000 UTC m=+0.245525483 container start eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 02:08:45 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : New worker (442425) forked
Dec  5 02:08:45 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : Loading success.
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.569 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac46e0a-6888-440f-b155-d4b0e8677304 in datapath 77ae1103-3871-4354-8e08-09bb5c0c1ad1 unbound from our chassis
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.574 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.591 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5d8922-cf72-4d0e-9ee7-5cf9896eec94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.594 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap77ae1103-31 in ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.598 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap77ae1103-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.598 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5475fdc8-a330-40c9-932e-886fecd16a55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.601 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd3d0bc-50c6-467a-9ee7-3f33cbe5802d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.620 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[2abcb8d2-0807-4835-b1ff-81d7ecd89264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.639 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f95a1d30-3138-4754-9e5f-3ffab40f29c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.685 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3258a4f0-c1eb-46ce-8f09-a12287c61464]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.6992] manager: (tap77ae1103-30): new Veth device (/org/freedesktop/NetworkManager/Devices/41)
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.698 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cc094df5-9e68-4095-aafa-b162bf9af598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.711 349552 DEBUG nova.compute.manager [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.711 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.713 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.713 349552 DEBUG oslo_concurrency.lockutils [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.714 349552 DEBUG nova.compute.manager [req-5b0e231d-b055-4c1a-8d5d-9de2e0b11f8a req-51967660-c59a-406f-94cd-ed7b1bb3d734 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Processing event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.716 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.724 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900525.7241232, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.724 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Resumed (Lifecycle Event)
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.729 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.747 349552 INFO nova.virt.libvirt.driver [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance spawned successfully.
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.747 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.752 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.759 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.759 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e04232-9da8-4c0e-8bb6-3a144de29ffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.770 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a1992c5e-6303-47eb-b988-c9de5f8bcedc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.781 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.787 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.788 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.790 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.791 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.792 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.793 349552 DEBUG nova.virt.libvirt.driver [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:08:45 compute-0 NetworkManager[49092]: <info>  [1764900525.8159] device (tap77ae1103-30): carrier: link connected
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.829 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d5128596-697c-46cb-922f-feb9a6248884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.856 349552 INFO nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 11.23 seconds to spawn the instance on the hypervisor.
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.856 349552 DEBUG nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.864 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e8384866-f7b6-4db0-bef6-d2a164ff65cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ae1103-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661247, 'reachable_time': 25682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442481, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.900 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7eb39f-c950-4c01-9915-bd7de547198b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe01:883e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661247, 'tstamp': 661247}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442485, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.916 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a878b7c-eacd-4160-9cc3-bada338ac048]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap77ae1103-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:01:88:3e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661247, 'reachable_time': 25682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442489, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:45.954 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee2f2af-9814-48b8-b488-86fe2301cd09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.960 349552 INFO nova.compute.manager [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 12.39 seconds to build instance.
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.985 349552 DEBUG oslo_concurrency.lockutils [None req-332c8098-cb59-4730-97c1-486b2bfc584b 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.998 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900525.9983442, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:08:45 compute-0 nova_compute[349548]: 2025-12-05 02:08:45.998 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Started (Lifecycle Event)
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.017 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.023 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900526.0002499, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.023 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Paused (Lifecycle Event)
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.034 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updated VIF entry in instance network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.034 349552 DEBUG nova.network.neutron [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.041 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1423e842-42e5-4067-8d88-3d263262feb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.043 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ae1103-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.044 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.044 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77ae1103-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.045 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:46 compute-0 kernel: tap77ae1103-30: entered promiscuous mode
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.046 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:46 compute-0 NetworkManager[49092]: <info>  [1764900526.0480] manager: (tap77ae1103-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.051 349552 DEBUG oslo_concurrency.lockutils [req-3fe960b8-eb6b-4df5-9abe-f3b3efad4a9d req-2c81012f-c18d-4147-8ba9-0dc8684a6e52 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.054 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap77ae1103-30, col_values=(('external_ids', {'iface-id': '5f3160d9-2dc7-4f0c-9f4e-c46a8a847823'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.055 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:46 compute-0 ovn_controller[89286]: 2025-12-05T02:08:46Z|00075|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.060 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.062 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3663ec57-183e-4d3d-a8b5-30acdf41c952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.063 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/77ae1103-3871-4354-8e08-09bb5c0c1ad1.pid.haproxy
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID 77ae1103-3871-4354-8e08-09bb5c0c1ad1
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:08:46 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:46.064 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'env', 'PROCESS_TAG=haproxy-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/77ae1103-3871-4354-8e08-09bb5c0c1ad1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.068 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.077 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 196 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.6 MiB/s wr, 152 op/s
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.092 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:08:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.325 349552 DEBUG nova.compute.manager [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.325 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG oslo_concurrency.lockutils [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.326 349552 DEBUG nova.compute.manager [req-c0e8edfe-6b01-4821-9803-d326c9104772 req-72d9a3e8-61d3-4511-9896-4a4cbd4389e1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Processing event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.327 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.335 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900526.335343, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.336 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.338 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.347 349552 INFO nova.virt.libvirt.driver [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance spawned successfully.#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.347 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.372 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.380 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.386 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.387 349552 DEBUG nova.virt.libvirt.driver [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.409 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.450 349552 INFO nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.451 349552 DEBUG nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.537 349552 INFO nova.compute.manager [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 10.68 seconds to build instance.#033[00m
Dec  5 02:08:46 compute-0 nova_compute[349548]: 2025-12-05 02:08:46.553 349552 DEBUG oslo_concurrency.lockutils [None req-5f62d794-2ddf-4c35-b6cf-50ab42140cad 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.590029649 +0000 UTC m=+0.096050584 container create 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.544674727 +0000 UTC m=+0.050695702 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:08:46 compute-0 systemd[1]: Started libpod-conmon-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope.
Dec  5 02:08:46 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 02:08:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:46 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 02:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c66e076ce0b97b1ffb0be792f84404fb2f83ab9c6ac5cd8cc44b4f6206b0bf01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.735818365 +0000 UTC m=+0.241839380 container init 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:08:46 compute-0 podman[442520]: 2025-12-05 02:08:46.752019389 +0000 UTC m=+0.258040354 container start 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:08:46 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : New worker (442559) forked
Dec  5 02:08:46 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : Loading success.
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.933 349552 DEBUG nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.933 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.934 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.935 349552 DEBUG oslo_concurrency.lockutils [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.936 349552 DEBUG nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:08:47 compute-0 nova_compute[349548]: 2025-12-05 02:08:47.936 349552 WARNING nova.compute.manager [req-528e3ace-205c-4298-a372-54c864c2d233 req-dbc5be51-d750-48da-903f-c521a6f4fbc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received unexpected event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec  5 02:08:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec  5 02:08:48 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec  5 02:08:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.5 MiB/s wr, 184 op/s
Dec  5 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.742 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Successfully updated port: a240e2ef-1773-4509-ac04-eae1f5d36e08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.762 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.763 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:48 compute-0 nova_compute[349548]: 2025-12-05 02:08:48.763 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.034 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.042 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.147 349552 DEBUG nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.148 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.149 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.149 349552 DEBUG oslo_concurrency.lockutils [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.150 349552 DEBUG nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.150 349552 WARNING nova.compute.manager [req-7967438d-1106-4f88-b77c-2f4fac985ba0 req-3e828140-6dac-4094-b847-9026455b4c74 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received unexpected event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00076|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec  5 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00077|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00078|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec  5 02:08:49 compute-0 ovn_controller[89286]: 2025-12-05T02:08:49Z|00079|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.743 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:49 compute-0 nova_compute[349548]: 2025-12-05 02:08:49.888 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:49 compute-0 NetworkManager[49092]: <info>  [1764900529.8907] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec  5 02:08:49 compute-0 NetworkManager[49092]: <info>  [1764900529.8979] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:50 compute-0 ovn_controller[89286]: 2025-12-05T02:08:50Z|00080|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec  5 02:08:50 compute-0 ovn_controller[89286]: 2025-12-05T02:08:50Z|00081|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:08:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 935 KiB/s rd, 6.4 MiB/s wr, 158 op/s
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.416 349552 DEBUG nova.compute.manager [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.422 349552 DEBUG nova.compute.manager [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing instance network info cache due to event network-changed-1eebaade-abb1-412c-95f2-2b7240026f85. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.423 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.424 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.425 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Refreshing network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.500 349552 DEBUG nova.network.neutron [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.527 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.528 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance network_info: |[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.533 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start _get_guest_xml network_info=[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.545 349552 WARNING nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.554 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.555 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.564 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.565 349552 DEBUG nova.virt.libvirt.host [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.566 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.567 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.568 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.569 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.570 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.574 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.575 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.577 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.578 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.581 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.583 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.584 349552 DEBUG nova.virt.hardware [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:08:50 compute-0 nova_compute[349548]: 2025-12-05 02:08:50.589 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1744661611' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.116 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.176 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.188 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.280 349552 DEBUG nova.compute.manager [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.282 349552 DEBUG nova.compute.manager [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing instance network info cache due to event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.283 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.284 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.285 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:08:51 compute-0 ovn_controller[89286]: 2025-12-05T02:08:51Z|00082|binding|INFO|Releasing lport 2395c111-a45b-4516-ba09-9b57be3b16f8 from this chassis (sb_readonly=0)
Dec  5 02:08:51 compute-0 ovn_controller[89286]: 2025-12-05T02:08:51Z|00083|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:08:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/427824560' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.758 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.759 349552 DEBUG nova.virt.libvirt.vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.760 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.761 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.762 349552 DEBUG nova.objects.instance [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.781 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <uuid>59e35a32-9023-4e49-be56-9da10df3027f</uuid>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <name>instance-00000008</name>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:name>tempest-ServerActionsTestJSON-server-1678320742</nova:name>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:08:50</nova:creationTime>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:user uuid="b4745812b7eb47908ded25b1eb7c7328">tempest-ServerActionsTestJSON-1914764435-project-member</nova:user>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:project uuid="dd34a6a62cf94436a2b836fa4f49c4fa">tempest-ServerActionsTestJSON-1914764435</nova:project>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <nova:port uuid="a240e2ef-1773-4509-ac04-eae1f5d36e08">
Dec  5 02:08:51 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <system>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="serial">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="uuid">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </system>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <os>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </os>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <features>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </features>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk">
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk.config">
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </source>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:08:51 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:16:81:87"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <target dev="tapa240e2ef-17"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log" append="off"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <video>
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </video>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:08:51 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:08:51 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:08:51 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:08:51 compute-0 nova_compute[349548]: </domain>
Dec  5 02:08:51 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.782 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Preparing to wait for external event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.782 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.783 349552 DEBUG nova.virt.libvirt.vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:08:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG nova.network.os_vif_util [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.784 349552 DEBUG os_vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.785 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.788 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.789 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa240e2ef-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.789 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa240e2ef-17, col_values=(('external_ids', {'iface-id': 'a240e2ef-1773-4509-ac04-eae1f5d36e08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:81:87', 'vm-uuid': '59e35a32-9023-4e49-be56-9da10df3027f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:51 compute-0 NetworkManager[49092]: <info>  [1764900531.7926] manager: (tapa240e2ef-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.791 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.793 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.801 349552 INFO os_vif [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.868 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] No VIF found with MAC fa:16:3e:16:81:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.869 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Using config drive#033[00m
Dec  5 02:08:51 compute-0 nova_compute[349548]: 2025-12-05 02:08:51.904 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:52 compute-0 nova_compute[349548]: 2025-12-05 02:08:52.068 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.8 MiB/s wr, 217 op/s
Dec  5 02:08:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.105 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Creating config drive at /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.112 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyee4by74 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.172 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.173 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.174 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.175 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.175 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.178 349552 INFO nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Terminating instance#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.181 349552 DEBUG nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.245 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyee4by74" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:53 compute-0 kernel: tap1eebaade-ab (unregistering): left promiscuous mode
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.2973] device (tap1eebaade-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00084|binding|INFO|Releasing lport 1eebaade-abb1-412c-95f2-2b7240026f85 from this chassis (sb_readonly=0)
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00085|binding|INFO|Setting lport 1eebaade-abb1-412c-95f2-2b7240026f85 down in Southbound
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00086|binding|INFO|Removing iface tap1eebaade-ab ovn-installed in OVS
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.322 349552 DEBUG nova.storage.rbd_utils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] rbd image 59e35a32-9023-4e49-be56-9da10df3027f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.330 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:f6:1b 10.100.0.5'], port_security=['fa:16:3e:af:f6:1b 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a2605a46-d779-4fc3-aeff-1e040dbcf17d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '159039e5ad4a46a7be912cd9756c76c5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c4f1e166-f717-4795-a420-f74c256dc7dd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec639c0d-4f01-43c3-a93f-8a1059f20fc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1eebaade-abb1-412c-95f2-2b7240026f85) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.333 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1eebaade-abb1-412c-95f2-2b7240026f85 in datapath 5a020a22-53e0-4ddc-b74b-9b343d75de26 unbound from our chassis#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.337 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a020a22-53e0-4ddc-b74b-9b343d75de26, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.339 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20ee3389-6bf4-47be-98c5-15727193ee4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.339 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 namespace which is not needed anymore#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.349 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config 59e35a32-9023-4e49-be56-9da10df3027f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:53 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec  5 02:08:53 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 8.357s CPU time.
Dec  5 02:08:53 compute-0 systemd-machined[138700]: Machine qemu-6-instance-00000006 terminated.
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.380 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.424 349552 INFO nova.virt.libvirt.driver [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Instance destroyed successfully.#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.425 349552 DEBUG nova.objects.instance [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lazy-loading 'resources' on Instance uuid a2605a46-d779-4fc3-aeff-1e040dbcf17d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:08:53 compute-0 podman[442674]: 2025-12-05 02:08:53.429534565 +0000 UTC m=+0.118795741 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:08:53 compute-0 podman[442672]: 2025-12-05 02:08:53.43793919 +0000 UTC m=+0.119276844 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.452 349552 DEBUG nova.virt.libvirt.vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1341674106',display_name='tempest-ServersTestJSON-server-1341674106',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1341674106',id=6,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFtAkt1MUQ0UzD5ZNvg4emNMv//Ij9tTGEw8OvSj9D0kv+BMeeC2o2SF/4NX3oBFTRlyP9xb/yjd8SFW8gRLZtLdrfqvo1ZN4HP0TzIFpNkL3M1lCxjV2HbcSKr2zzjZbg==',key_name='tempest-keypair-1342908531',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='159039e5ad4a46a7be912cd9756c76c5',ramdisk_id='',reservation_id='r-1esl3ayq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-244710502',owner_user_name='tempest-ServersTestJSON-244710502-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:08:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e8484f22ce84af99708d2e728179b92',uuid=a2605a46-d779-4fc3-aeff-1e040dbcf17d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.452 349552 DEBUG nova.network.os_vif_util [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converting VIF {"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.453 349552 DEBUG nova.network.os_vif_util [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.453 349552 DEBUG os_vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.458 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.458 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eebaade-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.469 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.472 349552 INFO os_vif [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:f6:1b,bridge_name='br-int',has_traffic_filtering=True,id=1eebaade-abb1-412c-95f2-2b7240026f85,network=Network(5a020a22-53e0-4ddc-b74b-9b343d75de26),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1eebaade-ab')#033[00m
Dec  5 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : haproxy version is 2.8.14-c23fe91
Dec  5 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [NOTICE]   (442418) : path to executable is /usr/sbin/haproxy
Dec  5 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [WARNING]  (442418) : Exiting Master process...
Dec  5 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [ALERT]    (442418) : Current worker (442425) exited with code 143 (Terminated)
Dec  5 02:08:53 compute-0 neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26[442413]: [WARNING]  (442418) : All workers exited. Exiting... (0)
Dec  5 02:08:53 compute-0 systemd[1]: libpod-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope: Deactivated successfully.
Dec  5 02:08:53 compute-0 podman[442768]: 2025-12-05 02:08:53.531971746 +0000 UTC m=+0.060893868 container died eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4-userdata-shm.mount: Deactivated successfully.
Dec  5 02:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f12546ccb3acfced01c79d379c89ef48eed6791003c99536b68dbe8f03ece420-merged.mount: Deactivated successfully.
Dec  5 02:08:53 compute-0 podman[442768]: 2025-12-05 02:08:53.634644894 +0000 UTC m=+0.163567026 container cleanup eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.645 349552 DEBUG oslo_concurrency.processutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config 59e35a32-9023-4e49-be56-9da10df3027f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.646 349552 INFO nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deleting local config drive /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/disk.config because it was imported into RBD.#033[00m
Dec  5 02:08:53 compute-0 systemd[1]: libpod-conmon-eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4.scope: Deactivated successfully.
Dec  5 02:08:53 compute-0 kernel: tapa240e2ef-17: entered promiscuous mode
Dec  5 02:08:53 compute-0 systemd-udevd[442693]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7155] manager: (tapa240e2ef-17): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00087|binding|INFO|Claiming lport a240e2ef-1773-4509-ac04-eae1f5d36e08 for this chassis.
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00088|binding|INFO|a240e2ef-1773-4509-ac04-eae1f5d36e08: Claiming fa:16:3e:16:81:87 10.100.0.10
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7287] device (tapa240e2ef-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.728 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.7326] device (tapa240e2ef-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:08:53 compute-0 podman[442818]: 2025-12-05 02:08:53.736923971 +0000 UTC m=+0.070593810 container remove eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00089|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 ovn-installed in OVS
Dec  5 02:08:53 compute-0 ovn_controller[89286]: 2025-12-05T02:08:53Z|00090|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 up in Southbound
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.748 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.753 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[daadc2aa-9eb3-4cbe-87ad-1daa3c2ceef9]: (4, ('Fri Dec  5 02:08:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 (eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4)\neef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4\nFri Dec  5 02:08:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 (eef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4)\neef4a66cf8b19254e63df4d2aa3fb2989b984f39ae2622436844ea78244296d4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.756 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b811a762-f9eb-4097-b792-8c472bdc842c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.757 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a020a22-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.757 349552 DEBUG nova.compute.manager [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG nova.compute.manager [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.758 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 kernel: tap5a020a22-50: left promiscuous mode
Dec  5 02:08:53 compute-0 systemd-machined[138700]: New machine qemu-8-instance-00000008.
Dec  5 02:08:53 compute-0 nova_compute[349548]: 2025-12-05 02:08:53.779 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.785 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c950d7ae-d134-478b-adc0-c92e93ed9c7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.801 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa222e3-bae8-461b-9972-1a7193e075fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.802 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c9ba52-ce8f-4031-883a-944969d51b76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.819 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f4da9fa7-5711-4383-8f65-7f032f9cafdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661117, 'reachable_time': 27029, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442844, 'error': None, 'target': 'ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d5a020a22\x2d53e0\x2d4ddc\x2db74b\x2d9b343d75de26.mount: Deactivated successfully.
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.824 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5a020a22-53e0-4ddc-b74b-9b343d75de26 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.824 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[5e26e179-900e-4b73-b516-3e88d3e72f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.825 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.828 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a9bc378d-2d4b-4990-99ce-02656b1fec0d#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.843 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[26762fef-8ce7-4680-a7cd-a18133017bdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.845 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa9bc378d-21 in ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.847 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa9bc378d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.847 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[18a361f2-8e3b-4700-8e57-9dabaf65024c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.849 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[114b1c3e-d8fb-4716-8071-97cab6b5e522]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.863 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5f20bf-24d0-402d-9908-3c9ba5d70cde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.887 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[083c1e46-c3d1-496e-be73-71d5536667df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.923 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[94066268-2ed7-4010-843b-78cb21f87c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.930 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5b3db3-89d8-4a63-8950-550d4f539cbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.9317] manager: (tapa9bc378d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.965 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[45d80916-41ce-4a15-9401-c431ff305fe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.971 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[baf0449d-aced-47e4-a958-9c2b69b3bd1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:53 compute-0 NetworkManager[49092]: <info>  [1764900533.9910] device (tapa9bc378d-20): carrier: link connected
Dec  5 02:08:53 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:53.996 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[81e084ee-3a9b-44cd-9168-52fa03461fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2e90c673-8923-43cc-a2df-d53f6aadcbb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662065, 'reachable_time': 20102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 442876, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.049 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[da3b7bb7-8b27-4960-b2b8-3460023bd738]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:feea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662065, 'tstamp': 662065}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 442877, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.077 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7f25a6-cc20-493d-8c5f-56a428d0538f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662065, 'reachable_time': 20102, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 442878, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 196 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.0 MiB/s wr, 203 op/s
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.106 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG oslo_concurrency.lockutils [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.107 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.108 349552 DEBUG nova.compute.manager [req-752c993f-0c9e-41f2-bbfb-9e88b58a0edf req-55a2f2c8-e7f1-4ea2-899a-a6bc2c1caf2d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-unplugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.117 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updated VIF entry in instance network info cache for port 1eebaade-abb1-412c-95f2-2b7240026f85. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.118 349552 DEBUG nova.network.neutron [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [{"id": "1eebaade-abb1-412c-95f2-2b7240026f85", "address": "fa:16:3e:af:f6:1b", "network": {"id": "5a020a22-53e0-4ddc-b74b-9b343d75de26", "bridge": "br-int", "label": "tempest-ServersTestJSON-124637277-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "159039e5ad4a46a7be912cd9756c76c5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1eebaade-ab", "ovs_interfaceid": "1eebaade-abb1-412c-95f2-2b7240026f85", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.125 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb1b99d-c329-4b56-826b-a16e198c2fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.140 349552 DEBUG oslo_concurrency.lockutils [req-2b42ceb7-cb7b-4c56-9383-b8a043508828 req-9160bea8-3902-4c18-9fd0-723653425c81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-a2605a46-d779-4fc3-aeff-1e040dbcf17d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.214 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[39896ae5-3631-40e0-b658-c2cf1c90f9f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.215 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.216 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.216 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9bc378d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:54 compute-0 NetworkManager[49092]: <info>  [1764900534.2193] manager: (tapa9bc378d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Dec  5 02:08:54 compute-0 kernel: tapa9bc378d-20: entered promiscuous mode
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.223 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa9bc378d-20, col_values=(('external_ids', {'iface-id': '3d0916d7-6f03-4daf-8f3b-126228223c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:54 compute-0 ovn_controller[89286]: 2025-12-05T02:08:54Z|00091|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.250 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.253 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a86198-57bc-475d-9426-801cd8578d75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.255 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:08:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:54.255 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'env', 'PROCESS_TAG=haproxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a9bc378d-2d4b-4990-99ce-02656b1fec0d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.285 349552 INFO nova.virt.libvirt.driver [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deleting instance files /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d_del#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.286 349552 INFO nova.virt.libvirt.driver [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deletion of /var/lib/nova/instances/a2605a46-d779-4fc3-aeff-1e040dbcf17d_del complete#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.369 349552 INFO nova.compute.manager [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.370 349552 DEBUG oslo.service.loopingcall [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.379 349552 DEBUG nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.380 349552 DEBUG nova.network.neutron [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.389 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900534.3887367, 59e35a32-9023-4e49-be56-9da10df3027f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.390 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Started (Lifecycle Event)#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.409 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.417 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900534.3894296, 59e35a32-9023-4e49-be56-9da10df3027f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.418 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.434 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.442 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:08:54 compute-0 nova_compute[349548]: 2025-12-05 02:08:54.463 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.713252786 +0000 UTC m=+0.106433575 container create 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.670494407 +0000 UTC m=+0.063675276 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:08:54 compute-0 systemd[1]: Started libpod-conmon-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope.
Dec  5 02:08:54 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/834861e6ee78dc388a1bf92deca51436b692390ae47802f4ad88169beea7eb85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.90250563 +0000 UTC m=+0.295686429 container init 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec  5 02:08:54 compute-0 podman[442952]: 2025-12-05 02:08:54.915759072 +0000 UTC m=+0.308939861 container start 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 02:08:54 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : New worker (442973) forked
Dec  5 02:08:54 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : Loading success.
Dec  5 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.538 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated VIF entry in instance network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.539 349552 DEBUG nova.network.neutron [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:55 compute-0 nova_compute[349548]: 2025-12-05 02:08:55.571 349552 DEBUG oslo_concurrency.lockutils [req-24990104-0dc5-4fef-9a67-abd20f7806d5 req-29397b65-a1cb-4569-a201-aaa3d7fb09e2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:08:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 171 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 35 KiB/s wr, 192 op/s
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.166 349552 DEBUG nova.network.neutron [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.194 349552 INFO nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Took 1.81 seconds to deallocate network for instance.#033[00m
Dec  5 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.203 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.204 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:08:56.205 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.254 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.255 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.376 349552 DEBUG oslo_concurrency.processutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.447 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.449 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.450 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.451 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.452 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Processing event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.453 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.454 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.455 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.456 349552 DEBUG oslo_concurrency.lockutils [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.457 349552 DEBUG nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.458 349552 WARNING nova.compute.manager [req-a9b2160b-3c44-486d-932d-d38a11ed629b req-4c2eab4e-b7dd-4f67-a1f6-914b739f4fe8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state building and task_state spawning.#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.460 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.481 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900536.4651842, 59e35a32-9023-4e49-be56-9da10df3027f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.485 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.491 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.510 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance spawned successfully.#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.510 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.525 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.548 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.557 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.558 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.558 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.559 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.560 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.560 349552 DEBUG nova.virt.libvirt.driver [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.584 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:08:56 compute-0 podman[443002]: 2025-12-05 02:08:56.683251154 +0000 UTC m=+0.091301750 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  5 02:08:56 compute-0 podman[443003]: 2025-12-05 02:08:56.68275571 +0000 UTC m=+0.091006042 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.723 349552 DEBUG nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.723 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.724 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.724 349552 DEBUG oslo_concurrency.lockutils [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.725 349552 DEBUG nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] No waiting events found dispatching network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.725 349552 WARNING nova.compute.manager [req-49587267-7f34-4ff9-bfc6-e7a484f82522 req-403c1748-f9f8-46ce-adc5-b29541546868 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received unexpected event network-vif-plugged-1eebaade-abb1-412c-95f2-2b7240026f85 for instance with vm_state deleted and task_state None.
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.757 349552 INFO nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 15.55 seconds to spawn the instance on the hypervisor.
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.757 349552 DEBUG nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.835 349552 INFO nova.compute.manager [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 17.10 seconds to build instance.
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.850 349552 DEBUG oslo_concurrency.lockutils [None req-684adbf7-f3bf-40b6-9115-a7d414204f89 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:08:56 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617628783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.884 349552 DEBUG oslo_concurrency.processutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.892 349552 DEBUG nova.compute.provider_tree [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.920 349552 DEBUG nova.scheduler.client.report [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:08:56 compute-0 nova_compute[349548]: 2025-12-05 02:08:56.962 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.009 349552 INFO nova.scheduler.client.report [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Deleted allocations for instance a2605a46-d779-4fc3-aeff-1e040dbcf17d
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.069 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.077 349552 DEBUG oslo_concurrency.lockutils [None req-955344db-15bf-4fde-82ca-054fa70d6785 5e8484f22ce84af99708d2e728179b92 159039e5ad4a46a7be912cd9756c76c5 - - default default] Lock "a2605a46-d779-4fc3-aeff-1e040dbcf17d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.962 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.962 349552 DEBUG nova.network.neutron [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:08:57 compute-0 nova_compute[349548]: 2025-12-05 02:08:57.992 349552 DEBUG oslo_concurrency.lockutils [req-24ccabad-56cd-426b-aa2b-0bf498b425f2 req-9bdb6494-60c3-405e-8c7c-83ca88dddb3d a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:08:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:08:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.6 MiB/s rd, 19 KiB/s wr, 197 op/s
Dec  5 02:08:58 compute-0 nova_compute[349548]: 2025-12-05 02:08:58.462 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:08:58 compute-0 nova_compute[349548]: 2025-12-05 02:08:58.678 349552 DEBUG nova.compute.manager [req-f30cd29f-ff82-4031-bc41-63cc7818f4ba req-10c24bfa-7eee-42eb-9432-2176db123970 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Received event network-vif-deleted-1eebaade-abb1-412c-95f2-2b7240026f85 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:08:58 compute-0 podman[443038]: 2025-12-05 02:08:58.739224781 +0000 UTC m=+0.146903539 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., version=9.4, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, config_id=edpm)
Dec  5 02:08:59 compute-0 podman[158197]: time="2025-12-05T02:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec  5 02:08:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9097 "" "Go-http-client/1.1"
Dec  5 02:09:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 165 op/s
Dec  5 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.008 349552 DEBUG nova.compute.manager [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG nova.compute.manager [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing instance network info cache due to event network-changed-a240e2ef-1773-4509-ac04-eae1f5d36e08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  5 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.010 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:09:01 compute-0 nova_compute[349548]: 2025-12-05 02:09:01.011 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Refreshing network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: ERROR   02:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:09:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:09:02 compute-0 nova_compute[349548]: 2025-12-05 02:09:02.070 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 16 KiB/s wr, 213 op/s
Dec  5 02:09:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.089 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated VIF entry in instance network info cache for port a240e2ef-1773-4509-ac04-eae1f5d36e08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.090 349552 DEBUG nova.network.neutron [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.111 349552 DEBUG oslo_concurrency.lockutils [req-2ee89b52-fdfc-4751-8dd3-641a03651246 req-ed91affe-ae01-4dd8-8b90-49725a932e86 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:09:03 compute-0 ovn_controller[89286]: 2025-12-05T02:09:03Z|00092|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:09:03 compute-0 ovn_controller[89286]: 2025-12-05T02:09:03Z|00093|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:03 compute-0 nova_compute[349548]: 2025-12-05 02:09:03.648 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 16 KiB/s wr, 129 op/s
Dec  5 02:09:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Dec  5 02:09:07 compute-0 nova_compute[349548]: 2025-12-05 02:09:07.056 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:07 compute-0 nova_compute[349548]: 2025-12-05 02:09:07.072 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 92 op/s
Dec  5 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.419 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900533.418778, a2605a46-d779-4fc3-aeff-1e040dbcf17d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.421 349552 INFO nova.compute.manager [-] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] VM Stopped (Lifecycle Event)
Dec  5 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.448 349552 DEBUG nova.compute.manager [None req-93cafbe0-3a2a-4f56-b853-bae93cafa81c - - - - - -] [instance: a2605a46-d779-4fc3-aeff-1e040dbcf17d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:09:08 compute-0 nova_compute[349548]: 2025-12-05 02:09:08.469 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:08 compute-0 podman[443058]: 2025-12-05 02:09:08.684622563 +0000 UTC m=+0.087075552 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:09:08 compute-0 podman[443057]: 2025-12-05 02:09:08.702162294 +0000 UTC m=+0.091330951 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:09:08 compute-0 podman[443064]: 2025-12-05 02:09:08.726007863 +0000 UTC m=+0.097526625 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, container_name=openstack_network_exporter, managed_by=edpm_ansible)
Dec  5 02:09:08 compute-0 podman[443059]: 2025-12-05 02:09:08.781000173 +0000 UTC m=+0.158708818 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:09:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec  5 02:09:12 compute-0 nova_compute[349548]: 2025-12-05 02:09:12.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec  5 02:09:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.473 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.600 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.601 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.624 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.711 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.712 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.726 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.727 349552 INFO nova.compute.claims [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Claim successful on node compute-0.ctlplane.example.com
Dec  5 02:09:13 compute-0 nova_compute[349548]: 2025-12-05 02:09:13.885 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:09:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 150 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 507 KiB/s rd, 16 op/s
Dec  5 02:09:14 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:09:14 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109865076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.469 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.484 349552 DEBUG nova.compute.provider_tree [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.509 349552 DEBUG nova.scheduler.client.report [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.531 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.532 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.573 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.573 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.595 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.611 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.729 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.731 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.732 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating image(s)#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.773 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.815 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.845 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.853 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.900 349552 DEBUG nova.policy [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7eb322b6163b466fb7721796e0d10c1f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7771751d84d348319b2c3d632191b59c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.938 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.938 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.939 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.939 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.978 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:14 compute-0 nova_compute[349548]: 2025-12-05 02:09:14.985 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 86d3faa9-af9e-47de-bc0f-3e211167604f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.402 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 86d3faa9-af9e-47de-bc0f-3e211167604f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.556 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] resizing rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.774 349552 DEBUG nova.objects.instance [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'migration_context' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.790 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Ensure instance console log exists: /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.791 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:15 compute-0 nova_compute[349548]: 2025-12-05 02:09:15.792 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 153 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 71 KiB/s wr, 1 op/s
Dec  5 02:09:16 compute-0 nova_compute[349548]: 2025-12-05 02:09:16.117 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Successfully created port: 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:09:16
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'volumes', 'default.rgw.meta']
Dec  5 02:09:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.077 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.305 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Successfully updated port: 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.321 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.321 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.322 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.584 349552 DEBUG nova.compute.manager [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.585 349552 DEBUG nova.compute.manager [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing instance network info cache due to event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.587 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:09:17 compute-0 nova_compute[349548]: 2025-12-05 02:09:17.710 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:09:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec  5 02:09:18 compute-0 nova_compute[349548]: 2025-12-05 02:09:18.477 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.631 349552 DEBUG nova.network.neutron [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.658 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.659 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance network_info: |[{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.661 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.662 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.667 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start _get_guest_xml network_info=[{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.680 349552 WARNING nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.696 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.697 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.705 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.706 349552 DEBUG nova.virt.libvirt.host [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.707 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.707 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.709 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.709 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.710 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.711 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.712 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.712 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.713 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.714 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.715 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.715 349552 DEBUG nova.virt.hardware [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:09:19 compute-0 nova_compute[349548]: 2025-12-05 02:09:19.721 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 187 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Dec  5 02:09:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:09:20 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827995369' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.318 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.356 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.383 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:20 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:09:20 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1270812209' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.891 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.897 349552 DEBUG nova.virt.libvirt.vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:09:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.900 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.905 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.911 349552 DEBUG nova.objects.instance [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'pci_devices' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.965 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <uuid>86d3faa9-af9e-47de-bc0f-3e211167604f</uuid>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <name>instance-00000009</name>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:name>tempest-ServersTestManualDisk-server-1615802566</nova:name>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:09:19</nova:creationTime>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:user uuid="7eb322b6163b466fb7721796e0d10c1f">tempest-ServersTestManualDisk-1464391732-project-member</nova:user>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:project uuid="7771751d84d348319b2c3d632191b59c">tempest-ServersTestManualDisk-1464391732</nova:project>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <nova:port uuid="5ce2a2f7-a9e2-4922-b684-fefcfe3f6307">
Dec  5 02:09:20 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <system>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="serial">86d3faa9-af9e-47de-bc0f-3e211167604f</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="uuid">86d3faa9-af9e-47de-bc0f-3e211167604f</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </system>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <os>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </os>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <features>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </features>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/86d3faa9-af9e-47de-bc0f-3e211167604f_disk">
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </source>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config">
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </source>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:09:20 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:57:08:95"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <target dev="tap5ce2a2f7-a9"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/console.log" append="off"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <video>
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </video>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:09:20 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:09:20 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:09:20 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:09:20 compute-0 nova_compute[349548]: </domain>
Dec  5 02:09:20 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.984 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Preparing to wait for external event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.985 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.987 349552 DEBUG nova.virt.libvirt.vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:09:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.987 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.988 349552 DEBUG nova.network.os_vif_util [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.989 349552 DEBUG os_vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.991 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.993 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.997 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.997 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ce2a2f7-a9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:20 compute-0 nova_compute[349548]: 2025-12-05 02:09:20.998 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ce2a2f7-a9, col_values=(('external_ids', {'iface-id': '5ce2a2f7-a9e2-4922-b684-fefcfe3f6307', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:08:95', 'vm-uuid': '86d3faa9-af9e-47de-bc0f-3e211167604f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:21 compute-0 NetworkManager[49092]: <info>  [1764900561.0014] manager: (tap5ce2a2f7-a9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.004 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.013 349552 INFO os_vif [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9')#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.086 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.088 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.088 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] No VIF found with MAC fa:16:3e:57:08:95, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.089 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Using config drive#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.139 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.352 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updated VIF entry in instance network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.353 349552 DEBUG nova.network.neutron [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.367 349552 DEBUG oslo_concurrency.lockutils [req-b37398ab-98ea-42e2-8849-eaeec8ef8c64 req-04a4bd84-b4d7-4128-839c-1384fa03a437 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.614 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Creating config drive at /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.622 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwbo5204 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.780 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphwbo5204" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.817 349552 DEBUG nova.storage.rbd_utils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] rbd image 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:09:21 compute-0 nova_compute[349548]: 2025-12-05 02:09:21.827 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.076 349552 DEBUG oslo_concurrency.processutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config 86d3faa9-af9e-47de-bc0f-3e211167604f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.078 349552 INFO nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deleting local config drive /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f/disk.config because it was imported into RBD.#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.079 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 205 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 2.5 MiB/s wr, 43 op/s
Dec  5 02:09:22 compute-0 kernel: tap5ce2a2f7-a9: entered promiscuous mode
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.1333] manager: (tap5ce2a2f7-a9): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Dec  5 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00094|binding|INFO|Claiming lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for this chassis.
Dec  5 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00095|binding|INFO|5ce2a2f7-a9e2-4922-b684-fefcfe3f6307: Claiming fa:16:3e:57:08:95 10.100.0.8
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00096|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 ovn-installed in OVS
Dec  5 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00097|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 up in Southbound
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.149 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:08:95 10.100.0.8'], port_security=['fa:16:3e:57:08:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86d3faa9-af9e-47de-bc0f-3e211167604f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5a068ec-72e0-4934-878b-07d85634c361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7771751d84d348319b2c3d632191b59c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '90f6337f-8150-484e-95c9-0297abbd01b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0bc16ee-3841-439b-8236-7c21ef336dbd, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.152 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.158 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 in datapath f5a068ec-72e0-4934-878b-07d85634c361 bound to our chassis#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.161 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5a068ec-72e0-4934-878b-07d85634c361#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.176 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c94c99-99a5-4b15-9a6d-a86beaf4baf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.177 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5a068ec-71 in ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.180 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5a068ec-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.180 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[19ff237d-9806-4cbb-8905-3a5f2b84a7af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.181 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7b2799c3-a565-4ada-abb5-eec6d150b7df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 systemd-machined[138700]: New machine qemu-9-instance-00000009.
Dec  5 02:09:22 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.196 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[97721fc3-07f3-477d-b161-63095e8e4c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 systemd-udevd[443466]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2248] device (tap5ce2a2f7-a9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2256] device (tap5ce2a2f7-a9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.245 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2c6f24ba-deaf-4a07-8176-1699947b2fb9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.279 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[05e56605-c46e-491e-9b81-b54ceb685838]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.2870] manager: (tapf5a068ec-70): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.287 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[093b7c9a-3db7-471e-9279-fa430bdab6cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.334 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[16d7e619-d5d1-4e21-bacb-779924431c04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.342 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a8649c2e-15a0-424c-849a-3d7cd1c3735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.3764] device (tapf5a068ec-70): carrier: link connected
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.383 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a6912f-ace9-4141-894c-7282373bd766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.408 349552 DEBUG nova.compute.manager [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.409 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.409 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.410 349552 DEBUG oslo_concurrency.lockutils [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.411 349552 DEBUG nova.compute.manager [req-b3cb8dcf-598f-4c06-8f2b-0e0399508a26 req-94f4af22-3b5a-4eef-be69-ee5af86c966e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Processing event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.418 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[10f4b1ec-aeea-4680-9659-d1bbccfc1b54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5a068ec-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fe:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664903, 'reachable_time': 16555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443497, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.442 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[064a2da4-d865-486c-a7f1-c4a36262f667]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:fe72'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 664903, 'tstamp': 664903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 443498, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.476 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8652f6-b231-4f73-a60a-06d15fc4bdeb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5a068ec-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:fe:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664903, 'reachable_time': 16555, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 443499, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.518 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c221df4e-2aae-4f86-9dd7-1075a11c2adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.636 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5f9f2a-2d86-4b4d-b7b1-f27726fba883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.638 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5a068ec-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.639 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.640 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5a068ec-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:22 compute-0 NetworkManager[49092]: <info>  [1764900562.6453] manager: (tapf5a068ec-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 kernel: tapf5a068ec-70: entered promiscuous mode
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.656 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.659 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5a068ec-70, col_values=(('external_ids', {'iface-id': '607284a9-7bf5-4106-9085-2fdecab38aa1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.661 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ovn_controller[89286]: 2025-12-05T02:09:22Z|00098|binding|INFO|Releasing lport 607284a9-7bf5-4106-9085-2fdecab38aa1 from this chassis (sb_readonly=0)
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.680 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.685 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.686 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[63ac9297-e948-436b-b0bb-6263ebedfd20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.691 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-f5a068ec-72e0-4934-878b-07d85634c361
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/f5a068ec-72e0-4934-878b-07d85634c361.pid.haproxy
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID f5a068ec-72e0-4934-878b-07d85634c361
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:09:22 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:22.693 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'env', 'PROCESS_TAG=haproxy-f5a068ec-72e0-4934-878b-07d85634c361', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5a068ec-72e0-4934-878b-07d85634c361.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.844 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8440719, 86d3faa9-af9e-47de-bc0f-3e211167604f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.845 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Started (Lifecycle Event)#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.847 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.852 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.858 349552 INFO nova.virt.libvirt.driver [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance spawned successfully.#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.859 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.872 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.881 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.886 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.888 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.897 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.899 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.901 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.903 349552 DEBUG nova.virt.libvirt.driver [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.909 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.910 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8441749, 86d3faa9-af9e-47de-bc0f-3e211167604f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.911 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.940 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.948 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900562.8512092, 86d3faa9-af9e-47de-bc0f-3e211167604f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.948 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.971 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.976 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.981 349552 INFO nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 8.25 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.981 349552 DEBUG nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:09:22 compute-0 nova_compute[349548]: 2025-12-05 02:09:22.996 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:09:23 compute-0 nova_compute[349548]: 2025-12-05 02:09:23.045 349552 INFO nova.compute.manager [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 9.36 seconds to build instance.#033[00m
Dec  5 02:09:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:23 compute-0 nova_compute[349548]: 2025-12-05 02:09:23.064 349552 DEBUG oslo_concurrency.lockutils [None req-1a0f4801-5a24-4ff1-8c7e-eeeecf65f119 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.236545161 +0000 UTC m=+0.074151499 container create b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  5 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.19406906 +0000 UTC m=+0.031675448 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:09:23 compute-0 systemd[1]: Started libpod-conmon-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope.
Dec  5 02:09:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e031ad808e71024fd09cbfc3c286046cee526722de24473065b1214afbd5c889/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.361162414 +0000 UTC m=+0.198768762 container init b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:09:23 compute-0 podman[443574]: 2025-12-05 02:09:23.369544539 +0000 UTC m=+0.207150877 container start b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:09:23 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : New worker (443594) forked
Dec  5 02:09:23 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : Loading success.
Dec  5 02:09:23 compute-0 ovn_controller[89286]: 2025-12-05T02:09:23Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ca:ba:4f 10.100.0.11
Dec  5 02:09:23 compute-0 ovn_controller[89286]: 2025-12-05T02:09:23Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ca:ba:4f 10.100.0.11
Dec  5 02:09:23 compute-0 podman[443604]: 2025-12-05 02:09:23.69531611 +0000 UTC m=+0.113393709 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:09:23 compute-0 podman[443603]: 2025-12-05 02:09:23.695749002 +0000 UTC m=+0.105403855 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 02:09:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 217 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 236 KiB/s rd, 3.6 MiB/s wr, 71 op/s
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.526 349552 DEBUG nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG oslo_concurrency.lockutils [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.527 349552 DEBUG nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:09:24 compute-0 nova_compute[349548]: 2025-12-05 02:09:24.528 349552 WARNING nova.compute.manager [req-b574ea25-a7ca-48ca-9ecb-f927b2b65d6b req-c1db0f98-5fec-4534-a01d-fb46918cb2a5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received unexpected event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.001 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 225 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 362 KiB/s rd, 3.8 MiB/s wr, 81 op/s
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.700 349552 DEBUG nova.compute.manager [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.701 349552 DEBUG nova.compute.manager [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing instance network info cache due to event network-changed-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.702 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.703 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:09:26 compute-0 nova_compute[349548]: 2025-12-05 02:09:26.703 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Refreshing network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001442745979032363 of space, bias 1.0, pg target 0.4328237937097089 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:09:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.080 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.514 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.515 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.516 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.517 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.518 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.521 349552 INFO nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Terminating instance#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.523 349552 DEBUG nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:09:27 compute-0 kernel: tap5ce2a2f7-a9 (unregistering): left promiscuous mode
Dec  5 02:09:27 compute-0 NetworkManager[49092]: <info>  [1764900567.6260] device (tap5ce2a2f7-a9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00099|binding|INFO|Releasing lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 from this chassis (sb_readonly=0)
Dec  5 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00100|binding|INFO|Setting lport 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 down in Southbound
Dec  5 02:09:27 compute-0 ovn_controller[89286]: 2025-12-05T02:09:27Z|00101|binding|INFO|Removing iface tap5ce2a2f7-a9 ovn-installed in OVS
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.644 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.652 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:08:95 10.100.0.8'], port_security=['fa:16:3e:57:08:95 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '86d3faa9-af9e-47de-bc0f-3e211167604f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5a068ec-72e0-4934-878b-07d85634c361', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7771751d84d348319b2c3d632191b59c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '90f6337f-8150-484e-95c9-0297abbd01b1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0bc16ee-3841-439b-8236-7c21ef336dbd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.655 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 in datapath f5a068ec-72e0-4934-878b-07d85634c361 unbound from our chassis#033[00m
Dec  5 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.657 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5a068ec-72e0-4934-878b-07d85634c361, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.661 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.659 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6a0463-faf8-4787-b21d-9e5854c045bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:27.663 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 namespace which is not needed anymore#033[00m
Dec  5 02:09:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec  5 02:09:27 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 5.838s CPU time.
Dec  5 02:09:27 compute-0 systemd-machined[138700]: Machine qemu-9-instance-00000009 terminated.
Dec  5 02:09:27 compute-0 podman[443643]: 2025-12-05 02:09:27.747731826 +0000 UTC m=+0.143903294 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:09:27 compute-0 podman[443642]: 2025-12-05 02:09:27.750447582 +0000 UTC m=+0.161493937 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.767 349552 INFO nova.virt.libvirt.driver [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Instance destroyed successfully.#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.768 349552 DEBUG nova.objects.instance [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lazy-loading 'resources' on Instance uuid 86d3faa9-af9e-47de-bc0f-3e211167604f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.791 349552 DEBUG nova.virt.libvirt.vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:09:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1615802566',display_name='tempest-ServersTestManualDisk-server-1615802566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1615802566',id=9,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOcRO97guGa63+bps+A9FhbwCKswROHpaWQg4mABL2o9peSWqfNCYb59UZjb6DzrVFgPcALMXfGD8Zcw0e20RtTOhbatKip3vjrwBqcfA+Ox6W1aF5tJ18LwMyhNTkj73A==',key_name='tempest-keypair-1736515978',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:09:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7771751d84d348319b2c3d632191b59c',ramdisk_id='',reservation_id='r-8rl5cwmf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1464391732',owner_user_name='tempest-ServersTestManualDisk-1464391732-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:09:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7eb322b6163b466fb7721796e0d10c1f',uuid=86d3faa9-af9e-47de-bc0f-3e211167604f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.791 349552 DEBUG nova.network.os_vif_util [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converting VIF {"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.792 349552 DEBUG nova.network.os_vif_util [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.792 349552 DEBUG os_vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.794 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ce2a2f7-a9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:09:27 compute-0 nova_compute[349548]: 2025-12-05 02:09:27.803 349552 INFO os_vif [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:08:95,bridge_name='br-int',has_traffic_filtering=True,id=5ce2a2f7-a9e2-4922-b684-fefcfe3f6307,network=Network(f5a068ec-72e0-4934-878b-07d85634c361),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ce2a2f7-a9')#033[00m
Dec  5 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : haproxy version is 2.8.14-c23fe91
Dec  5 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [NOTICE]   (443592) : path to executable is /usr/sbin/haproxy
Dec  5 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [WARNING]  (443592) : Exiting Master process...
Dec  5 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [ALERT]    (443592) : Current worker (443594) exited with code 143 (Terminated)
Dec  5 02:09:27 compute-0 neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361[443588]: [WARNING]  (443592) : All workers exited. Exiting... (0)
Dec  5 02:09:27 compute-0 systemd[1]: libpod-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope: Deactivated successfully.
Dec  5 02:09:27 compute-0 podman[443710]: 2025-12-05 02:09:27.840394024 +0000 UTC m=+0.063025468 container died b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Dec  5 02:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1-userdata-shm.mount: Deactivated successfully.
Dec  5 02:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e031ad808e71024fd09cbfc3c286046cee526722de24473065b1214afbd5c889-merged.mount: Deactivated successfully.
Dec  5 02:09:27 compute-0 podman[443710]: 2025-12-05 02:09:27.898621176 +0000 UTC m=+0.121252620 container cleanup b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 02:09:27 compute-0 systemd[1]: libpod-conmon-b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1.scope: Deactivated successfully.
Dec  5 02:09:27 compute-0 podman[443755]: 2025-12-05 02:09:27.997581279 +0000 UTC m=+0.070110046 container remove b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.026 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[41101bde-cb3b-403d-8123-9102592dbd7b]: (4, ('Fri Dec  5 02:09:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 (b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1)\nb603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1\nFri Dec  5 02:09:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 (b603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1)\nb603954d7c3042314637702e620f3b742ce9886508c8d700f067c200d2a812b1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17f6aa88-71a5-48a3-9efc-25157f7b5cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.029 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5a068ec-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.031 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:28 compute-0 kernel: tapf5a068ec-70: left promiscuous mode
Dec  5 02:09:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.061 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.065 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9d386ec2-86ba-440a-8771-97b24bbace7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.075 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5d47cae4-5367-415a-ab6b-ff9dde4928b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.076 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[dc3ffe86-6b81-411a-b6b2-b7360c69c2bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.096 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45135b-13ca-4e37-b6f3-9da867f784a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 664893, 'reachable_time': 30665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 443770, 'error': None, 'target': 'ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.098 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5a068ec-72e0-4934-878b-07d85634c361 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:09:28 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:28.099 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[b7aaa2df-29ab-4538-b0cb-f59d49462972]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:09:28 compute-0 systemd[1]: run-netns-ovnmeta\x2df5a068ec\x2d72e0\x2d4934\x2d878b\x2d07d85634c361.mount: Deactivated successfully.
Dec  5 02:09:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.8 MiB/s wr, 148 op/s
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.654 349552 INFO nova.virt.libvirt.driver [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deleting instance files /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f_del#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.656 349552 INFO nova.virt.libvirt.driver [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deletion of /var/lib/nova/instances/86d3faa9-af9e-47de-bc0f-3e211167604f_del complete#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.729 349552 INFO nova.compute.manager [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.730 349552 DEBUG oslo.service.loopingcall [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.731 349552 DEBUG nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.731 349552 DEBUG nova.network.neutron [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.843 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updated VIF entry in instance network info cache for port 5ce2a2f7-a9e2-4922-b684-fefcfe3f6307. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.844 349552 DEBUG nova.network.neutron [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [{"id": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "address": "fa:16:3e:57:08:95", "network": {"id": "f5a068ec-72e0-4934-878b-07d85634c361", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-965896294-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7771751d84d348319b2c3d632191b59c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ce2a2f7-a9", "ovs_interfaceid": "5ce2a2f7-a9e2-4922-b684-fefcfe3f6307", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:09:28 compute-0 nova_compute[349548]: 2025-12-05 02:09:28.864 349552 DEBUG oslo_concurrency.lockutils [req-d6c855d8-92e6-4477-8874-f66366e70da1 req-f7332377-0ba4-4fcf-bfa3-4c33d607d233 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-86d3faa9-af9e-47de-bc0f-3e211167604f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.116 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.119 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.120 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.121 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.123 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.124 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-unplugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.125 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.126 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.128 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.129 349552 DEBUG oslo_concurrency.lockutils [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.130 349552 DEBUG nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] No waiting events found dispatching network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:09:29 compute-0 nova_compute[349548]: 2025-12-05 02:09:29.131 349552 WARNING nova.compute.manager [req-b94fbc1b-6d96-47cc-8b41-206c5f518d0c req-43090a92-906b-4427-9fa1-901a7e4b211e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received unexpected event network-vif-plugged-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:09:29 compute-0 podman[443774]: 2025-12-05 02:09:29.694101692 +0000 UTC m=+0.108233665 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, version=9.4, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git)
Dec  5 02:09:29 compute-0 podman[158197]: time="2025-12-05T02:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec  5 02:09:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9098 "" "Go-http-client/1.1"
Dec  5 02:09:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 229 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.6 MiB/s wr, 123 op/s
Dec  5 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.601 349552 DEBUG nova.network.neutron [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.622 349552 INFO nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Took 1.89 seconds to deallocate network for instance.#033[00m
Dec  5 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.680 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.681 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:30 compute-0 nova_compute[349548]: 2025-12-05 02:09:30.802 349552 DEBUG oslo_concurrency.processutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.257 349552 DEBUG nova.compute.manager [req-68f2878b-1058-4f92-8068-5b00662a7b1d req-0cf5325b-3d22-4842-8676-6fea7e9265e5 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Received event network-vif-deleted-5ce2a2f7-a9e2-4922-b684-fefcfe3f6307 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:09:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:09:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446295945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.340 349552 DEBUG oslo_concurrency.processutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.349 349552 DEBUG nova.compute.provider_tree [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.367 349552 DEBUG nova.scheduler.client.report [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.393 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.419 349552 INFO nova.scheduler.client.report [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Deleted allocations for instance 86d3faa9-af9e-47de-bc0f-3e211167604f#033[00m
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: ERROR   02:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:09:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:09:31 compute-0 nova_compute[349548]: 2025-12-05 02:09:31.492 349552 DEBUG oslo_concurrency.lockutils [None req-9b8e853b-6316-4ef3-a006-67936897c558 7eb322b6163b466fb7721796e0d10c1f 7771751d84d348319b2c3d632191b59c - - default default] Lock "86d3faa9-af9e-47de-bc0f-3e211167604f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.977s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 209 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 175 op/s
Dec  5 02:09:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:32.340 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.340 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:32.344 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:09:32 compute-0 nova_compute[349548]: 2025-12-05 02:09:32.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:33 compute-0 ovn_controller[89286]: 2025-12-05T02:09:33Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:81:87 10.100.0.10
Dec  5 02:09:33 compute-0 ovn_controller[89286]: 2025-12-05T02:09:33Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:81:87 10.100.0.10
Dec  5 02:09:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 196 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 175 op/s
Dec  5 02:09:36 compute-0 nova_compute[349548]: 2025-12-05 02:09:36.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 214 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 174 op/s
Dec  5 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.086 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:37 compute-0 nova_compute[349548]: 2025-12-05 02:09:37.801 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:09:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.252 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.253 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.253 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:09:38 compute-0 nova_compute[349548]: 2025-12-05 02:09:38.254 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:09:39 compute-0 podman[443820]: 2025-12-05 02:09:39.551664682 +0000 UTC m=+0.116854896 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:09:39 compute-0 podman[443822]: 2025-12-05 02:09:39.572949829 +0000 UTC m=+0.120790067 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm)
Dec  5 02:09:39 compute-0 podman[443819]: 2025-12-05 02:09:39.583358821 +0000 UTC m=+0.144278065 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 02:09:39 compute-0 podman[443821]: 2025-12-05 02:09:39.609179164 +0000 UTC m=+0.167149216 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec  5 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.977 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.994 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.995 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.995 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:39 compute-0 nova_compute[349548]: 2025-12-05 02:09:39.996 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.023 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.024 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.025 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Dec  5 02:09:40 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:09:40 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743466495' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.523 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.620 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.625 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:09:40 compute-0 nova_compute[349548]: 2025-12-05 02:09:40.625 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:09:41 compute-0 ovn_controller[89286]: 2025-12-05T02:09:41Z|00102|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:09:41 compute-0 ovn_controller[89286]: 2025-12-05T02:09:41Z|00103|binding|INFO|Releasing lport 5f3160d9-2dc7-4f0c-9f4e-c46a8a847823 from this chassis (sb_readonly=0)
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.138 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.139 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3627MB free_disk=59.89728927612305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.139 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.227 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.228 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 59e35a32-9023-4e49-be56-9da10df3027f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.228 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.229 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.310 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:09:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:41.348 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:09:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:09:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169650476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.829 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.844 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.867 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.901 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:09:41 compute-0 nova_compute[349548]: 2025-12-05 02:09:41.902 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.092 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1820: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 840 KiB/s rd, 2.2 MiB/s wr, 104 op/s
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.764 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900567.762436, 86d3faa9-af9e-47de-bc0f-3e211167604f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.765 349552 INFO nova.compute.manager [-] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.801 349552 DEBUG nova.compute.manager [None req-7ff89984-929c-48d5-b3f7-6d728247f215 - - - - - -] [instance: 86d3faa9-af9e-47de-bc0f-3e211167604f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.805 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.973 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:42 compute-0 nova_compute[349548]: 2025-12-05 02:09:42.974 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 1.2 MiB/s wr, 53 op/s
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 507f4b1c-554c-426e-afae-a0606ea0ae7c does not exist
Dec  5 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7f770fcd-d9c9-4a52-ad27-97b4a63de641 does not exist
Dec  5 02:09:44 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cfa94e93-ce9c-4a56-93c4-3bc8fa06d7b2 does not exist
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:09:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:09:45 compute-0 nova_compute[349548]: 2025-12-05 02:09:45.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:45 compute-0 nova_compute[349548]: 2025-12-05 02:09:45.092 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:09:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:09:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:09:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:09:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103136362' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.430405639 +0000 UTC m=+0.065965190 container create 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.403456294 +0000 UTC m=+0.039015805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:45 compute-0 systemd[1]: Started libpod-conmon-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope.
Dec  5 02:09:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.581948187 +0000 UTC m=+0.217507718 container init 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.59883476 +0000 UTC m=+0.234394311 container start 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.606067703 +0000 UTC m=+0.241627304 container attach 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:09:45 compute-0 clever_beaver[444229]: 167 167
Dec  5 02:09:45 compute-0 systemd[1]: libpod-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope: Deactivated successfully.
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.612059881 +0000 UTC m=+0.247619432 container died 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 02:09:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-c209eca1529f4ae98dee9e390e0a531c193d59a1a2485d0af26fac056218c198-merged.mount: Deactivated successfully.
Dec  5 02:09:45 compute-0 podman[444214]: 2025-12-05 02:09:45.691035874 +0000 UTC m=+0.326595395 container remove 43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:09:45 compute-0 systemd[1]: libpod-conmon-43d87703764eadecd58e29c49be47e39966fea57674fcd0ded733394db7fa82c.scope: Deactivated successfully.
Dec  5 02:09:45 compute-0 podman[444255]: 2025-12-05 02:09:45.956130025 +0000 UTC m=+0.072694469 container create 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:45.926648598 +0000 UTC m=+0.043213092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:46 compute-0 systemd[1]: Started libpod-conmon-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope.
Dec  5 02:09:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.104183065 +0000 UTC m=+0.220747549 container init 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 796 KiB/s wr, 34 op/s
Dec  5 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.132359204 +0000 UTC m=+0.248923658 container start 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 02:09:46 compute-0 podman[444255]: 2025-12-05 02:09:46.137816337 +0000 UTC m=+0.254380821 container attach 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:09:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:09:47 compute-0 nova_compute[349548]: 2025-12-05 02:09:47.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:47 compute-0 gifted_babbage[444269]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:09:47 compute-0 gifted_babbage[444269]: --> relative data size: 1.0
Dec  5 02:09:47 compute-0 gifted_babbage[444269]: --> All data devices are unavailable
Dec  5 02:09:47 compute-0 systemd[1]: libpod-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Deactivated successfully.
Dec  5 02:09:47 compute-0 podman[444255]: 2025-12-05 02:09:47.525454392 +0000 UTC m=+1.642018906 container died 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:09:47 compute-0 systemd[1]: libpod-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Consumed 1.321s CPU time.
Dec  5 02:09:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb70900c1830ad3fbd998f05e115c1e891d2915c71254a749735c18078eb3994-merged.mount: Deactivated successfully.
Dec  5 02:09:47 compute-0 podman[444255]: 2025-12-05 02:09:47.62954018 +0000 UTC m=+1.746104654 container remove 44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_babbage, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:09:47 compute-0 systemd[1]: libpod-conmon-44298cdafa473f4e8cd9dd5ccff9c1bd67cb793020844fbd935a3aca5c89eb1d.scope: Deactivated successfully.
Dec  5 02:09:47 compute-0 nova_compute[349548]: 2025-12-05 02:09:47.808 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.070093) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588070142, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 902, "num_deletes": 257, "total_data_size": 1145184, "memory_usage": 1163296, "flush_reason": "Manual Compaction"}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588080372, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1133727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36795, "largest_seqno": 37696, "table_properties": {"data_size": 1129197, "index_size": 2118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10112, "raw_average_key_size": 19, "raw_value_size": 1119931, "raw_average_value_size": 2153, "num_data_blocks": 94, "num_entries": 520, "num_filter_entries": 520, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900516, "oldest_key_time": 1764900516, "file_creation_time": 1764900588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 10313 microseconds, and 3680 cpu microseconds.
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.080413) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1133727 bytes OK
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.080427) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082344) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082355) EVENT_LOG_v1 {"time_micros": 1764900588082351, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082370) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1140737, prev total WAL file size 1140737, number of live WAL files 2.
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.083225) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1107KB)], [83(8386KB)]
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588083277, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 9721784, "oldest_snapshot_seqno": -1}
Dec  5 02:09:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1823: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 74 KiB/s rd, 22 KiB/s wr, 8 op/s
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 5659 keys, 9618790 bytes, temperature: kUnknown
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588156863, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9618790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9579864, "index_size": 23648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14213, "raw_key_size": 143771, "raw_average_key_size": 25, "raw_value_size": 9476413, "raw_average_value_size": 1674, "num_data_blocks": 971, "num_entries": 5659, "num_filter_entries": 5659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900588, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.158100) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9618790 bytes
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.161129) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.8 rd, 130.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(17.1) write-amplify(8.5) OK, records in: 6189, records dropped: 530 output_compression: NoCompression
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.161172) EVENT_LOG_v1 {"time_micros": 1764900588161154, "job": 48, "event": "compaction_finished", "compaction_time_micros": 73761, "compaction_time_cpu_micros": 40880, "output_level": 6, "num_output_files": 1, "total_output_size": 9618790, "num_input_records": 6189, "num_output_records": 5659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588163096, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900588167876, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.082980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:09:48.168198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:09:48 compute-0 podman[444446]: 2025-12-05 02:09:48.893827556 +0000 UTC m=+0.086725072 container create 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:09:48 compute-0 podman[444446]: 2025-12-05 02:09:48.862836667 +0000 UTC m=+0.055734183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:48 compute-0 systemd[1]: Started libpod-conmon-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope.
Dec  5 02:09:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.05521673 +0000 UTC m=+0.248114306 container init 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.072615547 +0000 UTC m=+0.265513063 container start 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:09:49 compute-0 eager_haibt[444461]: 167 167
Dec  5 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.080632022 +0000 UTC m=+0.273529588 container attach 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:09:49 compute-0 systemd[1]: libpod-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope: Deactivated successfully.
Dec  5 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.083834442 +0000 UTC m=+0.276731918 container died 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:09:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98996a6939f00fe69cc1db8392d6aacbc1e99cd585e003e46c1556617997079-merged.mount: Deactivated successfully.
Dec  5 02:09:49 compute-0 podman[444446]: 2025-12-05 02:09:49.163491975 +0000 UTC m=+0.356389491 container remove 0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 02:09:49 compute-0 systemd[1]: libpod-conmon-0c1c9ffff881008654d0e2c6791fb576dfa563ba46688aad46f88805a764a987.scope: Deactivated successfully.
Dec  5 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.492766434 +0000 UTC m=+0.109215222 container create 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.455694635 +0000 UTC m=+0.072143473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:49 compute-0 systemd[1]: Started libpod-conmon-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope.
Dec  5 02:09:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.68633986 +0000 UTC m=+0.302788708 container init 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.718049099 +0000 UTC m=+0.334497877 container start 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:09:49 compute-0 podman[444486]: 2025-12-05 02:09:49.725507948 +0000 UTC m=+0.341956706 container attach 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:09:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec  5 02:09:50 compute-0 festive_herschel[444503]: {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    "0": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "devices": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "/dev/loop3"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            ],
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_name": "ceph_lv0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_size": "21470642176",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "name": "ceph_lv0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "tags": {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_name": "ceph",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.crush_device_class": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.encrypted": "0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_id": "0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.vdo": "0"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            },
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "vg_name": "ceph_vg0"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        }
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    ],
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    "1": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "devices": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "/dev/loop4"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            ],
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_name": "ceph_lv1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_size": "21470642176",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "name": "ceph_lv1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "tags": {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_name": "ceph",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.crush_device_class": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.encrypted": "0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_id": "1",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.vdo": "0"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            },
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "vg_name": "ceph_vg1"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        }
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    ],
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    "2": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "devices": [
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "/dev/loop5"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            ],
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_name": "ceph_lv2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_size": "21470642176",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "name": "ceph_lv2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "tags": {
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.cluster_name": "ceph",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.crush_device_class": "",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.encrypted": "0",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osd_id": "2",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:                "ceph.vdo": "0"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            },
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "type": "block",
Dec  5 02:09:50 compute-0 festive_herschel[444503]:            "vg_name": "ceph_vg2"
Dec  5 02:09:50 compute-0 festive_herschel[444503]:        }
Dec  5 02:09:50 compute-0 festive_herschel[444503]:    ]
Dec  5 02:09:50 compute-0 festive_herschel[444503]: }
Dec  5 02:09:50 compute-0 systemd[1]: libpod-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope: Deactivated successfully.
Dec  5 02:09:50 compute-0 podman[444486]: 2025-12-05 02:09:50.647354607 +0000 UTC m=+1.263803365 container died 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:09:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-380ef00d82ea0efe88f9aa2acc87765f841530d53e4ce1167cd8de34170728cc-merged.mount: Deactivated successfully.
Dec  5 02:09:50 compute-0 podman[444486]: 2025-12-05 02:09:50.754815719 +0000 UTC m=+1.371264507 container remove 736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:09:50 compute-0 systemd[1]: libpod-conmon-736a3cc9ebfaaf25da2ca9de354a1e5ba9febfe77a485bf025460c5eb215e766.scope: Deactivated successfully.
Dec  5 02:09:51 compute-0 podman[444663]: 2025-12-05 02:09:51.918345221 +0000 UTC m=+0.063544402 container create c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:09:51 compute-0 systemd[1]: Started libpod-conmon-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope.
Dec  5 02:09:51 compute-0 podman[444663]: 2025-12-05 02:09:51.895763208 +0000 UTC m=+0.040962379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.033385315 +0000 UTC m=+0.178584526 container init c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.049088615 +0000 UTC m=+0.194287826 container start c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.056216035 +0000 UTC m=+0.201415246 container attach c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:09:52 compute-0 admiring_murdock[444678]: 167 167
Dec  5 02:09:52 compute-0 systemd[1]: libpod-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope: Deactivated successfully.
Dec  5 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.061713679 +0000 UTC m=+0.206912860 container died c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 02:09:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1550ea99a5c93a50b29fa34b54f9760e2c92222d3c699c1fcace7d59daa0003-merged.mount: Deactivated successfully.
Dec  5 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.101 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1825: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 13 KiB/s wr, 0 op/s
Dec  5 02:09:52 compute-0 podman[444663]: 2025-12-05 02:09:52.119814058 +0000 UTC m=+0.265013239 container remove c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:09:52 compute-0 systemd[1]: libpod-conmon-c724ea35775044323311f17570eb743af89e5e201dfc7fe56708ce1cfe75cc5b.scope: Deactivated successfully.
Dec  5 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.358124088 +0000 UTC m=+0.072413261 container create 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:09:52 compute-0 systemd[1]: Started libpod-conmon-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope.
Dec  5 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.334547867 +0000 UTC m=+0.048837060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:09:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.505371565 +0000 UTC m=+0.219660818 container init 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Dec  5 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.541831777 +0000 UTC m=+0.256120970 container start 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:09:52 compute-0 podman[444702]: 2025-12-05 02:09:52.548660868 +0000 UTC m=+0.262950121 container attach 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:09:52 compute-0 nova_compute[349548]: 2025-12-05 02:09:52.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:53 compute-0 trusting_clarke[444717]: {
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_id": 0,
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "type": "bluestore"
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    },
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_id": 1,
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "type": "bluestore"
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    },
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_id": 2,
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:        "type": "bluestore"
Dec  5 02:09:53 compute-0 trusting_clarke[444717]:    }
Dec  5 02:09:53 compute-0 trusting_clarke[444717]: }
Dec  5 02:09:53 compute-0 systemd[1]: libpod-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Deactivated successfully.
Dec  5 02:09:53 compute-0 systemd[1]: libpod-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Consumed 1.097s CPU time.
Dec  5 02:09:53 compute-0 podman[444702]: 2025-12-05 02:09:53.644326729 +0000 UTC m=+1.358615922 container died 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:09:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe6b7f7d33d57c601afdf4fab07022be39464421b3eda4609a3e1faf18fc10fc-merged.mount: Deactivated successfully.
Dec  5 02:09:53 compute-0 podman[444702]: 2025-12-05 02:09:53.724503937 +0000 UTC m=+1.438793090 container remove 8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:09:53 compute-0 systemd[1]: libpod-conmon-8a272fe825a97a820a907735da859e15eb1a77cbd60a42908b6eb972826e55c8.scope: Deactivated successfully.
Dec  5 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:09:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:09:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c41b4246-d5e3-4803-acf9-602258eb2971 does not exist
Dec  5 02:09:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2f74545f-d479-4cfe-80d3-89e00adab0be does not exist
Dec  5 02:09:53 compute-0 podman[444760]: 2025-12-05 02:09:53.865475678 +0000 UTC m=+0.088864742 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 02:09:53 compute-0 podman[444761]: 2025-12-05 02:09:53.902353332 +0000 UTC m=+0.121530588 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:09:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 426 B/s wr, 0 op/s
Dec  5 02:09:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:54 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:09:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.7 KiB/s wr, 0 op/s
Dec  5 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.205 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.206 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:09:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:09:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.102 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.585 349552 DEBUG nova.objects.instance [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'flavor' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.638 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.638 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:09:57 compute-0 nova_compute[349548]: 2025-12-05 02:09:57.815 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:09:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:09:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1828: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec  5 02:09:58 compute-0 podman[444848]: 2025-12-05 02:09:58.731745437 +0000 UTC m=+0.128633256 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:09:58 compute-0 podman[444849]: 2025-12-05 02:09:58.733212369 +0000 UTC m=+0.131567459 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:09:59 compute-0 podman[158197]: time="2025-12-05T02:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45045 "" "Go-http-client/1.1"
Dec  5 02:09:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9108 "" "Go-http-client/1.1"
Dec  5 02:10:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 1 op/s
Dec  5 02:10:00 compute-0 podman[444885]: 2025-12-05 02:10:00.723175278 +0000 UTC m=+0.122476281 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, version=9.4, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Dec  5 02:10:00 compute-0 nova_compute[349548]: 2025-12-05 02:10:00.892 349552 DEBUG nova.network.neutron [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.045 349552 DEBUG nova.compute.manager [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.046 349552 DEBUG nova.compute.manager [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.046 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: ERROR   02:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:10:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:10:01 compute-0 nova_compute[349548]: 2025-12-05 02:10:01.679 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:02 compute-0 nova_compute[349548]: 2025-12-05 02:10:02.105 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.2 KiB/s wr, 1 op/s
Dec  5 02:10:02 compute-0 nova_compute[349548]: 2025-12-05 02:10:02.820 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.929 349552 DEBUG nova.network.neutron [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.961 349552 DEBUG oslo_concurrency.lockutils [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.962 349552 DEBUG nova.compute.manager [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.963 349552 DEBUG nova.compute.manager [None req-66ba1571-6b32-4c06-a012-9edeb15b9cae 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] network_info to inject: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.965 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:03 compute-0 nova_compute[349548]: 2025-12-05 02:10:03.966 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:10:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 5.1 KiB/s wr, 1 op/s
Dec  5 02:10:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 6.7 KiB/s wr, 1 op/s
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.345 349552 DEBUG nova.objects.instance [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'flavor' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.349 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.349 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.380 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.392 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.532 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.534 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.548 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.550 349552 INFO nova.compute.claims [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:10:06 compute-0 nova_compute[349548]: 2025-12-05 02:10:06.710 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.100 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.103 349552 DEBUG nova.network.neutron [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.108 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.136 349552 DEBUG oslo_concurrency.lockutils [req-75a9498b-c287-4f54-8603-345e4d5bcd94 req-9e0c7872-4283-4f5b-9c26-41599d194a39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.138 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:10:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826563883' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.233 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.249 349552 DEBUG nova.compute.provider_tree [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.273 349552 DEBUG nova.scheduler.client.report [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.304 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.306 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.376 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.377 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.415 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.462 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.563 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.566 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.567 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating image(s)#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.618 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.677 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.738 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.749 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.824 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.846 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.848 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.849 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.850 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.900 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.920 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:07 compute-0 nova_compute[349548]: 2025-12-05 02:10:07.951 349552 DEBUG nova.policy [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f18ce80284524cbb9497cac2c6e6bf32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:10:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 15 KiB/s wr, 2 op/s
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.265 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.266 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.269 349552 INFO nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Rebooting instance#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.296 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.296 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.297 349552 DEBUG nova.network.neutron [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.435 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.582 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] resizing rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.818 349552 DEBUG nova.objects.instance [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.836 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.836 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Ensure instance console log exists: /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.837 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.837 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:08 compute-0 nova_compute[349548]: 2025-12-05 02:10:08.838 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 216 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s wr, 1 op/s
Dec  5 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.261 349552 DEBUG nova.network.neutron [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.500 349552 DEBUG nova.compute.manager [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.501 349552 DEBUG nova.compute.manager [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing instance network info cache due to event network-changed-2ac46e0a-6888-440f-b155-d4b0e8677304. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:10:10 compute-0 nova_compute[349548]: 2025-12-05 02:10:10.501 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:10 compute-0 podman[445091]: 2025-12-05 02:10:10.709042346 +0000 UTC m=+0.108321833 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 02:10:10 compute-0 podman[445092]: 2025-12-05 02:10:10.713950074 +0000 UTC m=+0.109690542 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:10:10 compute-0 podman[445094]: 2025-12-05 02:10:10.745336346 +0000 UTC m=+0.127529963 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec  5 02:10:10 compute-0 podman[445093]: 2025-12-05 02:10:10.750406388 +0000 UTC m=+0.139818368 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:10:11 compute-0 nova_compute[349548]: 2025-12-05 02:10:11.250 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Successfully created port: 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:10:11 compute-0 nova_compute[349548]: 2025-12-05 02:10:11.400 349552 DEBUG nova.network.neutron [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1835: 321 pgs: 321 active+clean; 250 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.133 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.136 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:12 compute-0 kernel: tapa240e2ef-17 (unregistering): left promiscuous mode
Dec  5 02:10:12 compute-0 NetworkManager[49092]: <info>  [1764900612.4742] device (tapa240e2ef-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.493 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00104|binding|INFO|Releasing lport a240e2ef-1773-4509-ac04-eae1f5d36e08 from this chassis (sb_readonly=0)
Dec  5 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00105|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 down in Southbound
Dec  5 02:10:12 compute-0 ovn_controller[89286]: 2025-12-05T02:10:12Z|00106|binding|INFO|Removing iface tapa240e2ef-17 ovn-installed in OVS
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.503 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.507 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.509 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9bc378d-2d4b-4990-99ce-02656b1fec0d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.511 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8247dd-216e-4cb0-a2ff-ce8ec0804fc7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.512 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace which is not needed anymore#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  5 02:10:12 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 45.020s CPU time.
Dec  5 02:10:12 compute-0 systemd-machined[138700]: Machine qemu-8-instance-00000008 terminated.
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : haproxy version is 2.8.14-c23fe91
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [NOTICE]   (442971) : path to executable is /usr/sbin/haproxy
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : Exiting Master process...
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : Exiting Master process...
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [ALERT]    (442971) : Current worker (442973) exited with code 143 (Terminated)
Dec  5 02:10:12 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[442967]: [WARNING]  (442971) : All workers exited. Exiting... (0)
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.752 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 systemd[1]: libpod-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope: Deactivated successfully.
Dec  5 02:10:12 compute-0 conmon[442967]: conmon 4c5edeef5f34dfd67481 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope/container/memory.events
Dec  5 02:10:12 compute-0 podman[445193]: 2025-12-05 02:10:12.764671631 +0000 UTC m=+0.095050831 container died 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.772 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance destroyed successfully.#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.772 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'resources' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.796 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.797 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.798 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.799 349552 DEBUG os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.802 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.802 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa240e2ef-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.811 349552 INFO os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')#033[00m
Dec  5 02:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05-userdata-shm.mount: Deactivated successfully.
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.822 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start _get_guest_xml network_info=[{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:10:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-834861e6ee78dc388a1bf92deca51436b692390ae47802f4ad88169beea7eb85-merged.mount: Deactivated successfully.
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.833 349552 WARNING nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.841 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.842 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:10:12 compute-0 podman[445193]: 2025-12-05 02:10:12.843368341 +0000 UTC m=+0.173747541 container cleanup 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.853 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.853 349552 DEBUG nova.virt.libvirt.host [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.854 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.855 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.856 349552 DEBUG nova.virt.hardware [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.857 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'vcpu_model' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:12 compute-0 systemd[1]: libpod-conmon-4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05.scope: Deactivated successfully.
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.873 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:12 compute-0 podman[445229]: 2025-12-05 02:10:12.957714033 +0000 UTC m=+0.081010016 container remove 4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.971 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5943a3-698b-47e2-846a-36a68656f019]: (4, ('Fri Dec  5 02:10:12 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05)\n4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05\nFri Dec  5 02:10:12 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05)\n4c5edeef5f34dfd674818c6df9c9c3d43e543af4bab38484b9e8514164eedd05\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.975 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[012f2406-55b5-4fa5-b948-e1aa64e63fae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:12 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:12.977 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:12 compute-0 kernel: tapa9bc378d-20: left promiscuous mode
Dec  5 02:10:12 compute-0 nova_compute[349548]: 2025-12-05 02:10:12.984 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.004 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[abd352f9-90a1-4f19-b3fc-0ca7deac7f60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.025 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb386c9-945d-44b8-acfe-694f778aeb16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.027 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[780c55b8-dfa2-4b65-9a7c-acfa5f83ec0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.056 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3baa72-d6f0-4f54-a286-2305a30882a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662057, 'reachable_time': 32430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445252, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.060 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:10:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:13.060 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0dc74c-55f9-4686-bf88-e1c8567e7e8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:13 compute-0 systemd[1]: run-netns-ovnmeta\x2da9bc378d\x2d2d4b\x2d4990\x2d99ce\x2d02656b1fec0d.mount: Deactivated successfully.
Dec  5 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.093 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 DEBUG oslo_concurrency.lockutils [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 DEBUG nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.094 349552 WARNING nova.compute.manager [req-1288787f-8daf-4588-a3d0-97b8e5aac3ce req-f65f77a0-9ea3-4f2a-a511-533c3eafee26 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Dec  5 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:10:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684689122' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.418 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.492 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:10:13 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797718717' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.969 349552 DEBUG oslo_concurrency.processutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.971 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.971 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.972 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.974 349552 DEBUG nova.objects.instance [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'pci_devices' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:13 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.997 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:10:13 compute-0 nova_compute[349548]:  <uuid>59e35a32-9023-4e49-be56-9da10df3027f</uuid>
Dec  5 02:10:13 compute-0 nova_compute[349548]:  <name>instance-00000008</name>
Dec  5 02:10:13 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:10:13 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:name>tempest-ServerActionsTestJSON-server-1678320742</nova:name>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:10:12</nova:creationTime>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:user uuid="b4745812b7eb47908ded25b1eb7c7328">tempest-ServerActionsTestJSON-1914764435-project-member</nova:user>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:project uuid="dd34a6a62cf94436a2b836fa4f49c4fa">tempest-ServerActionsTestJSON-1914764435</nova:project>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <nova:port uuid="a240e2ef-1773-4509-ac04-eae1f5d36e08">
Dec  5 02:10:14 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <system>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="serial">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="uuid">59e35a32-9023-4e49-be56-9da10df3027f</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </system>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <os>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </os>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <features>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </features>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk">
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </source>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/59e35a32-9023-4e49-be56-9da10df3027f_disk.config">
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </source>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:10:14 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:16:81:87"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <target dev="tapa240e2ef-17"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f/console.log" append="off"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <video>
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </video>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <input type="keyboard" bus="usb"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:10:14 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:10:14 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:10:14 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:10:14 compute-0 nova_compute[349548]: </domain>
Dec  5 02:10:14 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.998 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:13.999 349552 DEBUG nova.virt.libvirt.driver [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.virt.libvirt.vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:12Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.000 349552 DEBUG nova.network.os_vif_util [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.001 349552 DEBUG os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.002 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.003 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.006 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.006 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa240e2ef-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.007 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa240e2ef-17, col_values=(('external_ids', {'iface-id': 'a240e2ef-1773-4509-ac04-eae1f5d36e08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:81:87', 'vm-uuid': '59e35a32-9023-4e49-be56-9da10df3027f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.0113] manager: (tapa240e2ef-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.013 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.021 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.024 349552 INFO os_vif [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')#033[00m
Dec  5 02:10:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Dec  5 02:10:14 compute-0 kernel: tapa240e2ef-17: entered promiscuous mode
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.149 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 systemd-udevd[445178]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00107|binding|INFO|Claiming lport a240e2ef-1773-4509-ac04-eae1f5d36e08 for this chassis.
Dec  5 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00108|binding|INFO|a240e2ef-1773-4509-ac04-eae1f5d36e08: Claiming fa:16:3e:16:81:87 10.100.0.10
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1520] manager: (tapa240e2ef-17): new Tun device (/org/freedesktop/NetworkManager/Devices/54)
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.160 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.162 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d bound to our chassis#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.165 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a9bc378d-2d4b-4990-99ce-02656b1fec0d#033[00m
Dec  5 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00109|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 ovn-installed in OVS
Dec  5 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00110|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 up in Southbound
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1767] device (tapa240e2ef-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.180 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7d303a4b-e81f-4427-8db6-81915c4d7d09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.182 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa9bc378d-21 in ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.184 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa9bc378d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.184 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b37c84f6-ca83-4974-83b1-05e0cb869a57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.1862] device (tapa240e2ef-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.186 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f9311d52-696b-45cd-87d4-3753af3639c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.201 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[1baeb28a-9e0f-411f-a1db-dabbe935c884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 systemd-machined[138700]: New machine qemu-10-instance-00000008.
Dec  5 02:10:14 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000008.
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.231 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[79e300bd-46cd-4992-a825-ec434e2f7b2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.264 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[73c0394c-74b1-4603-9716-9fbe12d24ceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.2723] manager: (tapa9bc378d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/55)
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.271 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[28ab3055-c708-4827-ab9e-fcbbb14386c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.309 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1b82918c-3a11-489f-b89e-17c3c2322201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.313 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[17e29d4f-3c4f-4607-8b07-ed9277853f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.3423] device (tapa9bc378d-20): carrier: link connected
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.349 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe454db-0a1c-431b-b594-2b95ebefa4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.370 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c81264b6-4bb9-48ae-b5a4-a9febab48746]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670100, 'reachable_time': 44877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445350, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.391 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f0355278-e171-44f3-8c57-069e13c9900f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec2:feea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670100, 'tstamp': 670100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445351, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.410 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e49a377f-1997-4b93-bce6-515d7b210f90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa9bc378d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c2:fe:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670100, 'reachable_time': 44877, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445352, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.460 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[009903df-7345-49ad-a606-cdce9f9c1190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.538 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bef3ccc-51aa-4dc6-9f99-0e15736143d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.539 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.540 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.541 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa9bc378d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 kernel: tapa9bc378d-20: entered promiscuous mode
Dec  5 02:10:14 compute-0 NetworkManager[49092]: <info>  [1764900614.5483] manager: (tapa9bc378d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.547 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.549 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa9bc378d-20, col_values=(('external_ids', {'iface-id': '3d0916d7-6f03-4daf-8f3b-126228223c53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 ovn_controller[89286]: 2025-12-05T02:10:14Z|00111|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.583 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.586 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.587 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ea29af44-e7cb-4690-a8de-35bb772cc23f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.589 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/a9bc378d-2d4b-4990-99ce-02656b1fec0d.pid.haproxy
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID a9bc378d-2d4b-4990-99ce-02656b1fec0d
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:10:14 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:14.590 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'env', 'PROCESS_TAG=haproxy-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a9bc378d-2d4b-4990-99ce-02656b1fec0d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.909 349552 DEBUG nova.network.neutron [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.927 349552 DEBUG oslo_concurrency.lockutils [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.928 349552 DEBUG nova.compute.manager [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.928 349552 DEBUG nova.compute.manager [None req-462b6329-01a2-45a9-b19f-01d82fb4c16c 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] network_info to inject: |[{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.930 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:14 compute-0 nova_compute[349548]: 2025-12-05 02:10:14.930 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Refreshing network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.070 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Successfully updated port: 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.090 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.091 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquired lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.091 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 DEBUG nova.virt.libvirt.host [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Removed pending event for 59e35a32-9023-4e49-be56-9da10df3027f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900615.0912287, 59e35a32-9023-4e49-be56-9da10df3027f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.093 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.095 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.098743874 +0000 UTC m=+0.081434218 container create 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.100 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance rebooted successfully.#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.100 349552 DEBUG nova.compute.manager [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.129 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.133 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:10:15 compute-0 systemd[1]: Started libpod-conmon-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope.
Dec  5 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.0665834 +0000 UTC m=+0.049273774 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:10:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.177 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.178 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900615.0963786, 59e35a32-9023-4e49-be56-9da10df3027f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.178 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Started (Lifecycle Event)#033[00m
Dec  5 02:10:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a4098034b79b17a1b0a33ca61c1f904969485d36ccd5269a78d56bbd845de7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.200 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.205 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.209150815 +0000 UTC m=+0.191841179 container init 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.215 349552 DEBUG oslo_concurrency.lockutils [None req-c6eb80fb-ab63-4140-bc68-fa3b4d5f24ef b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:15 compute-0 podman[445424]: 2025-12-05 02:10:15.216020078 +0000 UTC m=+0.198710422 container start 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:10:15 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : New worker (445447) forked
Dec  5 02:10:15 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : Loading success.
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.273 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 WARNING nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.274 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.275 349552 WARNING nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-changed-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG nova.compute.manager [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Refreshing instance network info cache due to event network-changed-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.276 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:15 compute-0 nova_compute[349548]: 2025-12-05 02:10:15.482 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 321 active+clean; 262 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.188 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.189 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.190 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.191 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.192 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.194 349552 INFO nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Terminating instance#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.197 349552 DEBUG nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:16 compute-0 kernel: tap2ac46e0a-68 (unregistering): left promiscuous mode
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:16 compute-0 NetworkManager[49092]: <info>  [1764900616.3258] device (tap2ac46e0a-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00112|binding|INFO|Releasing lport 2ac46e0a-6888-440f-b155-d4b0e8677304 from this chassis (sb_readonly=0)
Dec  5 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00113|binding|INFO|Setting lport 2ac46e0a-6888-440f-b155-d4b0e8677304 down in Southbound
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.357 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 ovn_controller[89286]: 2025-12-05T02:10:16Z|00114|binding|INFO|Removing iface tap2ac46e0a-68 ovn-installed in OVS
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.369 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ca:ba:4f 10.100.0.11'], port_security=['fa:16:3e:ca:ba:4f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '939ae9f2-b89c-4a19-96de-ab4dfc882a35', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '70b71e0f6ffe47ed86a910f90d71557a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fd91b173-28fd-4506-a2d4-b70d7da34ab9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1a9bd25-2abf-40fe-aac7-26f2653ba067, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=2ac46e0a-6888-440f-b155-d4b0e8677304) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:10:16
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.control', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec  5 02:10:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.375 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 2ac46e0a-6888-440f-b155-d4b0e8677304 in datapath 77ae1103-3871-4354-8e08-09bb5c0c1ad1 unbound from our chassis#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.379 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 77ae1103-3871-4354-8e08-09bb5c0c1ad1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.380 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e3df8d-70b7-4254-b8e7-81f9d0e2e647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.381 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 namespace which is not needed anymore#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.393 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec  5 02:10:16 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 46.899s CPU time.
Dec  5 02:10:16 compute-0 systemd-machined[138700]: Machine qemu-7-instance-00000007 terminated.
Dec  5 02:10:16 compute-0 NetworkManager[49092]: <info>  [1764900616.4344] manager: (tap2ac46e0a-68): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.438 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.461 349552 INFO nova.virt.libvirt.driver [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Instance destroyed successfully.#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.461 349552 DEBUG nova.objects.instance [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lazy-loading 'resources' on Instance uuid 939ae9f2-b89c-4a19-96de-ab4dfc882a35 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.483 349552 DEBUG nova.virt.libvirt.vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-604018291',display_name='tempest-AttachInterfacesUnderV243Test-server-604018291',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-604018291',id=7,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKU6bELVlVCoUJIshERiWUVj0OnvYD2CYxIalQbnWU21bRDwU7WBbW97LN2cH4XlAr/7mmUrM7ksINLIA4cX46Z53k6IEf2IAXFLlXwCAxrx7KcHDeFsx/HWqs2AH5gWDA==',key_name='tempest-keypair-1932183514',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='70b71e0f6ffe47ed86a910f90d71557a',ramdisk_id='',reservation_id='r-agiyf4o6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-532006644',owner_user_name='tempest-AttachInterfacesUnderV243Test-532006644-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3439b5cde2ff4830bb0294f007842282',uuid=939ae9f2-b89c-4a19-96de-ab4dfc882a35,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.483 349552 DEBUG nova.network.os_vif_util [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converting VIF {"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.484 349552 DEBUG nova.network.os_vif_util [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.485 349552 DEBUG os_vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.489 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2ac46e0a-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.497 349552 INFO os_vif [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ca:ba:4f,bridge_name='br-int',has_traffic_filtering=True,id=2ac46e0a-6888-440f-b155-d4b0e8677304,network=Network(77ae1103-3871-4354-8e08-09bb5c0c1ad1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2ac46e0a-68')#033[00m
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : haproxy version is 2.8.14-c23fe91
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [NOTICE]   (442557) : path to executable is /usr/sbin/haproxy
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : Exiting Master process...
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : Exiting Master process...
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [ALERT]    (442557) : Current worker (442559) exited with code 143 (Terminated)
Dec  5 02:10:16 compute-0 neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1[442534]: [WARNING]  (442557) : All workers exited. Exiting... (0)
Dec  5 02:10:16 compute-0 systemd[1]: libpod-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope: Deactivated successfully.
Dec  5 02:10:16 compute-0 podman[445500]: 2025-12-05 02:10:16.602622151 +0000 UTC m=+0.064276826 container died 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16-userdata-shm.mount: Deactivated successfully.
Dec  5 02:10:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c66e076ce0b97b1ffb0be792f84404fb2f83ab9c6ac5cd8cc44b4f6206b0bf01-merged.mount: Deactivated successfully.
Dec  5 02:10:16 compute-0 podman[445500]: 2025-12-05 02:10:16.65493111 +0000 UTC m=+0.116585775 container cleanup 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 02:10:16 compute-0 systemd[1]: libpod-conmon-12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16.scope: Deactivated successfully.
Dec  5 02:10:16 compute-0 podman[445531]: 2025-12-05 02:10:16.7646203 +0000 UTC m=+0.070062478 container remove 12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.779 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f5185954-fed7-450d-aa7e-1bc570526a0b]: (4, ('Fri Dec  5 02:10:16 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 (12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16)\n12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16\nFri Dec  5 02:10:16 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 (12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16)\n12faf4c2216d9372536395acf5e9f1614a1c5a76751643d625f5c8a217280b16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.782 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4faea1-29f1-43ef-b1b1-194d4918ca06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.786 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77ae1103-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.788 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 kernel: tap77ae1103-30: left promiscuous mode
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.796 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[693f6dc2-4cbe-44f9-88fc-83f03cf1a281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.814 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[732f6eaf-1118-4b09-ac69-4b27d1abb871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.815 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d98ac2f6-c49a-496d-af1a-85291d9d18fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 nova_compute[349548]: 2025-12-05 02:10:16.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.839 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[858247d4-c3ce-4767-a3e9-74714e6a38fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661233, 'reachable_time': 42827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445545, 'error': None, 'target': 'ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d77ae1103\x2d3871\x2d4354\x2d8e08\x2d09bb5c0c1ad1.mount: Deactivated successfully.
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.843 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-77ae1103-3871-4354-8e08-09bb5c0c1ad1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:10:16 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:16.844 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1d0f26-afee-4c5d-b127-2bc91bf10660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.116 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.214 349552 INFO nova.virt.libvirt.driver [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deleting instance files /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35_del#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.214 349552 INFO nova.virt.libvirt.driver [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deletion of /var/lib/nova/instances/939ae9f2-b89c-4a19-96de-ab4dfc882a35_del complete#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.303 349552 INFO nova.compute.manager [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 1.10 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.303 349552 DEBUG oslo.service.loopingcall [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.304 349552 DEBUG nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:10:17 compute-0 nova_compute[349548]: 2025-12-05 02:10:17.304 349552 DEBUG nova.network.neutron [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:10:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.296 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.297 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.297 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.298 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.298 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.299 349552 WARNING nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.300 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.300 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.301 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.302 349552 DEBUG oslo_concurrency.lockutils [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.302 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.303 349552 DEBUG nova.compute.manager [req-f99c7165-fa3e-456e-bb36-8180c0016a8d req-924f7264-dabe-4ccd-962c-f7b31be44d87 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-unplugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.363 349552 DEBUG nova.network.neutron [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.406 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Releasing lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.407 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance network_info: |[{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.408 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.409 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Refreshing network info cache for port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.414 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start _get_guest_xml network_info=[{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.428 349552 WARNING nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.449 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.450 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.457 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.458 349552 DEBUG nova.virt.libvirt.host [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.459 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.459 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.460 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.461 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.462 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.462 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.463 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.463 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.464 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.464 349552 DEBUG nova.virt.hardware [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.468 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.896 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updated VIF entry in instance network info cache for port 2ac46e0a-6888-440f-b155-d4b0e8677304. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.898 349552 DEBUG nova.network.neutron [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [{"id": "2ac46e0a-6888-440f-b155-d4b0e8677304", "address": "fa:16:3e:ca:ba:4f", "network": {"id": "77ae1103-3871-4354-8e08-09bb5c0c1ad1", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-680696631-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "70b71e0f6ffe47ed86a910f90d71557a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2ac46e0a-68", "ovs_interfaceid": "2ac46e0a-6888-440f-b155-d4b0e8677304", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.916 349552 DEBUG oslo_concurrency.lockutils [req-06a18c23-30b8-4680-9ca3-f4b33a766b4e req-e4f00246-7028-4fa6-b2f1-0f915a73aadd a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-939ae9f2-b89c-4a19-96de-ab4dfc882a35" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:10:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3988017926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:10:18 compute-0 nova_compute[349548]: 2025-12-05 02:10:18.990 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.047 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.056 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:10:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860217210' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.514 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
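The two `oslo_concurrency.processutils` round-trips above shell out to `ceph mon dump --format=json` and parse the returned mon map. A stdlib-only sketch of that pattern is below; it is an illustration of the logged command line, not nova's or oslo's code, and `ceph_mon_dump` assumes a reachable cluster plus a keyring for `client.openstack`.

```python
# Hedged sketch of the subprocess pattern behind the "Running cmd" /
# "CMD ... returned: 0" lines above. Function names are hypothetical.
import json
import subprocess

def build_mon_dump_cmd(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    # Argument vector matching the command line recorded in the log.
    return ["ceph", "mon", "dump", "--format=json",
            "--id", client_id, "--conf", conf]

def ceph_mon_dump(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    # check=True raises on a non-zero exit, mirroring the logged
    # "returned: 0" success path; requires ceph CLI access.
    out = subprocess.run(build_mon_dump_cmd(conf, client_id),
                         capture_output=True, text=True,
                         check=True, timeout=30)
    return json.loads(out.stdout)

# monmap = ceph_mon_dump()
# monitor_addrs = [m["addr"] for m in monmap["mons"]]
```

The parsed mon map is what lets the driver emit `<host name="..." port="6789"/>` elements for each RBD disk in the guest XML that follows.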
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.517 349552 DEBUG nova.virt.libvirt.vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-ServerAddressesTestJSON-1048961571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:10:07Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.518 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.520 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.523 349552 DEBUG nova.objects.instance [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.550 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <uuid>3391e1ba-0e6b-4113-b402-027e997b3cb9</uuid>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <name>instance-0000000a</name>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:name>tempest-ServerAddressesTestJSON-server-2017371141</nova:name>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:10:18</nova:creationTime>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:user uuid="f18ce80284524cbb9497cac2c6e6bf32">tempest-ServerAddressesTestJSON-1048961571-project-member</nova:user>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:project uuid="f120ce30568246929ef2dc1a9f0bd0c7">tempest-ServerAddressesTestJSON-1048961571</nova:project>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <nova:port uuid="26b950d4-e9c2-45ea-8e3a-bd06bf2227d4">
Dec  5 02:10:19 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <system>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="serial">3391e1ba-0e6b-4113-b402-027e997b3cb9</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="uuid">3391e1ba-0e6b-4113-b402-027e997b3cb9</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </system>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <os>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </os>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <features>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </features>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk">
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </source>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config">
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </source>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:10:19 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:6a:63:ca"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <target dev="tap26b950d4-e9"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/console.log" append="off"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <video>
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </video>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:10:19 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:10:19 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:10:19 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:10:19 compute-0 nova_compute[349548]: </domain>
Dec  5 02:10:19 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
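The `End _get_guest_xml` dump above is plain libvirt domain XML, so it can be inspected with the standard library. The sketch below pulls the RBD pool/image name and monitor endpoint out of each network disk; it runs against a trimmed sample copied from the log, purely for illustration.

```python
# Hedged sketch: parse the logged domain XML and extract RBD disk sources.
# The sample is a trimmed excerpt of the <devices> section dumped above.
import xml.etree.ElementTree as ET

sample = """<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(sample)
# ElementTree supports attribute predicates in findall() paths.
for disk in root.findall("./devices/disk[@type='network']"):
    src = disk.find("source")
    host = src.find("host")
    print(src.get("name"), host.get("name"), host.get("port"))
# vms/3391e1ba-0e6b-4113-b402-027e997b3cb9_disk 192.168.122.100 6789
```

The same approach works on the full dump (e.g. to confirm both `vda` and the `disk.config` cdrom point at the `vms` pool with `client.openstack` auth).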
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.553 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Preparing to wait for external event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.555 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.556 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.557 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.558 349552 DEBUG nova.virt.libvirt.vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-Serv
erAddressesTestJSON-1048961571-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:10:07Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.559 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.560 349552 DEBUG nova.network.os_vif_util [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.561 349552 DEBUG os_vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.564 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.565 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.566 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.571 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.572 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap26b950d4-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.573 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap26b950d4-e9, col_values=(('external_ids', {'iface-id': '26b950d4-e9c2-45ea-8e3a-bd06bf2227d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:63:ca', 'vm-uuid': '3391e1ba-0e6b-4113-b402-027e997b3cb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:19 compute-0 NetworkManager[49092]: <info>  [1764900619.5793] manager: (tap26b950d4-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.583 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.585 349552 INFO os_vif [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9')#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.661 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.666 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.668 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] No VIF found with MAC fa:16:3e:6a:63:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.669 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Using config drive#033[00m
Dec  5 02:10:19 compute-0 nova_compute[349548]: 2025-12-05 02:10:19.736 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 202 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 332 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.369 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updated VIF entry in instance network info cache for port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.370 349552 DEBUG nova.network.neutron [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [{"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.387 349552 DEBUG oslo_concurrency.lockutils [req-62de5951-7806-43aa-9029-f754eece4c76 req-7b0d15bd-f79b-493b-93dd-ac9c03293643 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-3391e1ba-0e6b-4113-b402-027e997b3cb9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.420 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Creating config drive at /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.426 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntct65h4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.457 349552 DEBUG nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.458 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.458 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.459 349552 DEBUG oslo_concurrency.lockutils [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.459 349552 DEBUG nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] No waiting events found dispatching network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.460 349552 WARNING nova.compute.manager [req-80077e0e-46ec-4a59-8722-ffd4821907e4 req-73b7d638-c2ef-4fce-9644-cd216782537e a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received unexpected event network-vif-plugged-2ac46e0a-6888-440f-b155-d4b0e8677304 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.524 349552 DEBUG nova.network.neutron [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.547 349552 INFO nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Took 3.24 seconds to deallocate network for instance.#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.570 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntct65h4" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.617 349552 DEBUG nova.storage.rbd_utils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] rbd image 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.627 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.663 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.665 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.773 349552 DEBUG oslo_concurrency.processutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.908 349552 DEBUG oslo_concurrency.processutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config 3391e1ba-0e6b-4113-b402-027e997b3cb9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.910 349552 INFO nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deleting local config drive /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9/disk.config because it was imported into RBD.#033[00m
Dec  5 02:10:20 compute-0 kernel: tap26b950d4-e9: entered promiscuous mode
Dec  5 02:10:20 compute-0 NetworkManager[49092]: <info>  [1764900620.9867] manager: (tap26b950d4-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.986 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:20 compute-0 ovn_controller[89286]: 2025-12-05T02:10:20Z|00115|binding|INFO|Claiming lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for this chassis.
Dec  5 02:10:20 compute-0 ovn_controller[89286]: 2025-12-05T02:10:20Z|00116|binding|INFO|26b950d4-e9c2-45ea-8e3a-bd06bf2227d4: Claiming fa:16:3e:6a:63:ca 10.100.0.12
Dec  5 02:10:20 compute-0 nova_compute[349548]: 2025-12-05 02:10:20.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.009 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:63:ca 10.100.0.12'], port_security=['fa:16:3e:6a:63:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3391e1ba-0e6b-4113-b402-027e997b3cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff773210-0089-4a3b-936f-15f2b6743c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8661fcbe-cefc-4ef8-b7d8-1566fb9b4df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd9eaded-949d-4594-9bc0-f87080068e48, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.011 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 in datapath ff773210-0089-4a3b-936f-15f2b6743c77 bound to our chassis#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.012 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ff773210-0089-4a3b-936f-15f2b6743c77#033[00m
Dec  5 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00117|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 ovn-installed in OVS
Dec  5 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00118|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 up in Southbound
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.027 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.031 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7ea57c14-a9f3-40c8-a17b-ce6fed4c0e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.032 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapff773210-01 in ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.035 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapff773210-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.035 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[28cb1315-2a10-4d00-b8c7-fa6116273017]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.036 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2b91fe73-7005-482e-baa1-1c65e98757f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 systemd-udevd[445703]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.051 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[36263673-c419-48ce-b799-0d7c945a15f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.0573] device (tap26b950d4-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.0617] device (tap26b950d4-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:10:21 compute-0 systemd-machined[138700]: New machine qemu-11-instance-0000000a.
Dec  5 02:10:21 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.080 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7450c2-405f-484f-8084-7152f8282941]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.120 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[2adcb426-a8f5-4226-a357-7e0a65ef58e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.129 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[83ff9854-50b1-406d-9bd4-0b5989a66908]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 systemd-udevd[445707]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.1350] manager: (tapff773210-00): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.168 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[5c190967-3b0f-44e3-bd26-83b1c1b1ed37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.173 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[60420da3-4206-4cda-9f47-3b1be7d597c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.2070] device (tapff773210-00): carrier: link connected
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.211 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[2f4de7ea-4022-4ab6-bbfc-7208c0414d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.234 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fd036e86-e232-4a46-b119-c8dd84fa8678]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff773210-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670786, 'reachable_time': 43944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445736, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.258 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f4bb63-b4b7-462f-b09a-c650c2916e4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:a70b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670786, 'tstamp': 670786}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 445737, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:10:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2024992637' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.278 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd8fdf8-67f8-4723-a858-b5d36826e5f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapff773210-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:0b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670786, 'reachable_time': 43944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 445738, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.305 349552 DEBUG oslo_concurrency.processutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.314 349552 DEBUG nova.compute.provider_tree [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.330 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0cf593-fff9-41a4-8ce6-01cf9d44a72d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.332 349552 DEBUG nova.scheduler.client.report [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.371 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.706s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.419 349552 INFO nova.scheduler.client.report [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Deleted allocations for instance 939ae9f2-b89c-4a19-96de-ab4dfc882a35#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.428 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0622f133-2875-4adf-b48e-daf7de652dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.429 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff773210-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.429 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.430 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff773210-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:21 compute-0 kernel: tapff773210-00: entered promiscuous mode
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.432 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:21 compute-0 NetworkManager[49092]: <info>  [1764900621.4362] manager: (tapff773210-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.440 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapff773210-00, col_values=(('external_ids', {'iface-id': 'ff2931b3-fb94-4976-be60-545b1f5dca2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:21 compute-0 ovn_controller[89286]: 2025-12-05T02:10:21Z|00119|binding|INFO|Releasing lport ff2931b3-fb94-4976-be60-545b1f5dca2f from this chassis (sb_readonly=0)
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.472 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.474 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fe77d5-f3f3-4af9-96f9-72059a3f1f0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.475 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-ff773210-0089-4a3b-936f-15f2b6743c77
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/ff773210-0089-4a3b-936f-15f2b6743c77.pid.haproxy
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID ff773210-0089-4a3b-936f-15f2b6743c77
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:10:21 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:21.476 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'env', 'PROCESS_TAG=haproxy-ff773210-0089-4a3b-936f-15f2b6743c77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ff773210-0089-4a3b-936f-15f2b6743c77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.517 349552 DEBUG oslo_concurrency.lockutils [None req-2f3c98c4-75c0-4724-b405-f36cf798abdb 3439b5cde2ff4830bb0294f007842282 70b71e0f6ffe47ed86a910f90d71557a - - default default] Lock "939ae9f2-b89c-4a19-96de-ab4dfc882a35" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.328s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.701 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900621.7012355, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.702 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Started (Lifecycle Event)#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.817 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.825 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900621.7013443, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.825 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.864 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.871 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:10:21 compute-0 nova_compute[349548]: 2025-12-05 02:10:21.891 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:10:21 compute-0 podman[445811]: 2025-12-05 02:10:21.941354042 +0000 UTC m=+0.102657234 container create 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:10:21 compute-0 podman[445811]: 2025-12-05 02:10:21.878072715 +0000 UTC m=+0.039375897 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:10:22 compute-0 systemd[1]: Started libpod-conmon-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope.
Dec  5 02:10:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:10:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f87d456a5d6144246a464ab2071935c153697f26dcdbc4ce01d7704fe82715/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:22 compute-0 podman[445811]: 2025-12-05 02:10:22.095209364 +0000 UTC m=+0.256512596 container init 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  5 02:10:22 compute-0 podman[445811]: 2025-12-05 02:10:22.105115502 +0000 UTC m=+0.266418694 container start 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:22 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : New worker (445832) forked
Dec  5 02:10:22 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : Loading success.
Dec  5 02:10:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1840: 321 pgs: 321 active+clean; 183 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.543 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Received event network-vif-deleted-2ac46e0a-6888-440f-b155-d4b0e8677304 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.544 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.545 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.546 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.546 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.547 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Processing event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.548 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.549 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.550 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.550 349552 DEBUG oslo_concurrency.lockutils [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.551 349552 DEBUG nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.552 349552 WARNING nova.compute.manager [req-53d30055-d424-4818-9e0f-3e20d2749fad req-ede8446c-b0bd-4409-897a-960012128c21 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received unexpected event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with vm_state building and task_state spawning.#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.553 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.559 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900622.559234, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.561 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.564 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.571 349552 INFO nova.virt.libvirt.driver [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance spawned successfully.#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.572 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.578 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.587 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.619 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.622 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.623 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.624 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.625 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.626 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.627 349552 DEBUG nova.virt.libvirt.driver [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.678 349552 INFO nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 15.11 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.679 349552 DEBUG nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.740 349552 INFO nova.compute.manager [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 16.27 seconds to build instance.#033[00m
Dec  5 02:10:22 compute-0 nova_compute[349548]: 2025-12-05 02:10:22.761 349552 DEBUG oslo_concurrency.lockutils [None req-2b168022-534e-41f8-821e-ad194a8859a0 f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 305 KiB/s wr, 109 op/s
Dec  5 02:10:24 compute-0 nova_compute[349548]: 2025-12-05 02:10:24.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:24 compute-0 podman[445841]: 2025-12-05 02:10:24.703818588 +0000 UTC m=+0.103665783 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:10:24 compute-0 podman[445842]: 2025-12-05 02:10:24.736087994 +0000 UTC m=+0.130567458 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:24.998 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.000 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.008 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.009 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.009 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.011 349552 INFO nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Terminating instance#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.013 349552 DEBUG nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:10:25 compute-0 kernel: tap26b950d4-e9 (unregistering): left promiscuous mode
Dec  5 02:10:25 compute-0 NetworkManager[49092]: <info>  [1764900625.1193] device (tap26b950d4-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00120|binding|INFO|Releasing lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 from this chassis (sb_readonly=0)
Dec  5 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00121|binding|INFO|Setting lport 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 down in Southbound
Dec  5 02:10:25 compute-0 ovn_controller[89286]: 2025-12-05T02:10:25Z|00122|binding|INFO|Removing iface tap26b950d4-e9 ovn-installed in OVS
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.147 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:63:ca 10.100.0.12'], port_security=['fa:16:3e:6a:63:ca 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '3391e1ba-0e6b-4113-b402-027e997b3cb9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff773210-0089-4a3b-936f-15f2b6743c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f120ce30568246929ef2dc1a9f0bd0c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8661fcbe-cefc-4ef8-b7d8-1566fb9b4df4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd9eaded-949d-4594-9bc0-f87080068e48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.150 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 in datapath ff773210-0089-4a3b-936f-15f2b6743c77 unbound from our chassis#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.152 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff773210-0089-4a3b-936f-15f2b6743c77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.155 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d023610a-abaa-45dc-9ef0-c26149adf90f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.156 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 namespace which is not needed anymore#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.171 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec  5 02:10:25 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 3.271s CPU time.
Dec  5 02:10:25 compute-0 systemd-machined[138700]: Machine qemu-11-instance-0000000a terminated.
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.272 349552 INFO nova.virt.libvirt.driver [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Instance destroyed successfully.#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.272 349552 DEBUG nova.objects.instance [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lazy-loading 'resources' on Instance uuid 3391e1ba-0e6b-4113-b402-027e997b3cb9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.295 349552 DEBUG nova.virt.libvirt.vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-2017371141',display_name='tempest-ServerAddressesTestJSON-server-2017371141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-2017371141',id=10,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:10:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f120ce30568246929ef2dc1a9f0bd0c7',ramdisk_id='',reservation_id='r-mwlljk6u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1048961571',owner_user_name='tempest-ServerAddressesTestJSON-1048961571-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:22Z,user_data=None,user_id='f18ce80284524cbb9497cac2c6e6bf32',uuid=3391e1ba-0e6b-4113-b402-027e997b3cb9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.296 349552 DEBUG nova.network.os_vif_util [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converting VIF {"id": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "address": "fa:16:3e:6a:63:ca", "network": {"id": "ff773210-0089-4a3b-936f-15f2b6743c77", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-4563358-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f120ce30568246929ef2dc1a9f0bd0c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap26b950d4-e9", "ovs_interfaceid": "26b950d4-e9c2-45ea-8e3a-bd06bf2227d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.298 349552 DEBUG nova.network.os_vif_util [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.299 349552 DEBUG os_vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.301 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.302 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap26b950d4-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.304 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.307 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.310 349552 INFO os_vif [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:63:ca,bridge_name='br-int',has_traffic_filtering=True,id=26b950d4-e9c2-45ea-8e3a-bd06bf2227d4,network=Network(ff773210-0089-4a3b-936f-15f2b6743c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap26b950d4-e9')#033[00m
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : haproxy version is 2.8.14-c23fe91
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [NOTICE]   (445830) : path to executable is /usr/sbin/haproxy
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [WARNING]  (445830) : Exiting Master process...
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [WARNING]  (445830) : Exiting Master process...
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [ALERT]    (445830) : Current worker (445832) exited with code 143 (Terminated)
Dec  5 02:10:25 compute-0 neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77[445826]: [WARNING]  (445830) : All workers exited. Exiting... (0)
Dec  5 02:10:25 compute-0 systemd[1]: libpod-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope: Deactivated successfully.
Dec  5 02:10:25 compute-0 podman[445921]: 2025-12-05 02:10:25.434973613 +0000 UTC m=+0.091373948 container died 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d-userdata-shm.mount: Deactivated successfully.
Dec  5 02:10:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f87d456a5d6144246a464ab2071935c153697f26dcdbc4ce01d7704fe82715-merged.mount: Deactivated successfully.
Dec  5 02:10:25 compute-0 podman[445921]: 2025-12-05 02:10:25.507607993 +0000 UTC m=+0.164008328 container cleanup 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:10:25 compute-0 systemd[1]: libpod-conmon-1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d.scope: Deactivated successfully.
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.551 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.551 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG oslo_concurrency.lockutils [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.552 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.553 349552 DEBUG nova.compute.manager [req-a0ab87e5-3bee-45f2-989a-3b8ad2449188 req-29f313c1-5948-4321-8c47-e7796844ee41 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-unplugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:10:25 compute-0 podman[445962]: 2025-12-05 02:10:25.632878261 +0000 UTC m=+0.083228248 container remove 1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.644 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[69f07dd1-2a6c-4c10-9ca6-843dd9f061d9]: (4, ('Fri Dec  5 02:10:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 (1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d)\n1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d\nFri Dec  5 02:10:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 (1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d)\n1ff096dccd9dc4795265f55f795004135ca0cefe1ea13c4b753431eb93173f4d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.646 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7b1105-9ee7-447e-8111-2446b88bd4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.647 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff773210-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:25 compute-0 kernel: tapff773210-00: left promiscuous mode
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.650 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 nova_compute[349548]: 2025-12-05 02:10:25.676 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.679 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6617b5ba-ef4b-4ecf-ba36-326eab0483d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.695 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ba2396ee-73dc-4967-8653-b58a2affe564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.696 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cb071d55-8859-4f25-8e3e-d105530421e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.720 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c482841b-10ce-48cd-a4b5-aefb407694c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670777, 'reachable_time': 18995, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 445975, 'error': None, 'target': 'ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.723 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ff773210-0089-4a3b-936f-15f2b6743c77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:10:25 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:25.724 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[10355490-7fc5-4a2b-a1c7-01a50d564dca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:10:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dff773210\x2d0089\x2d4a3b\x2d936f\x2d15f2b6743c77.mount: Deactivated successfully.
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.086 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:10:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 183 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 114 op/s
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.297 349552 INFO nova.virt.libvirt.driver [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deleting instance files /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9_del#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.298 349552 INFO nova.virt.libvirt.driver [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deletion of /var/lib/nova/instances/3391e1ba-0e6b-4113-b402-027e997b3cb9_del complete#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.356 349552 INFO nova.compute.manager [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.357 349552 DEBUG oslo.service.loopingcall [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.358 349552 DEBUG nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:10:26 compute-0 nova_compute[349548]: 2025-12-05 02:10:26.359 349552 DEBUG nova.network.neutron [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011085768094354367 of space, bias 1.0, pg target 0.332573042830631 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:10:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.786 349552 DEBUG nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.786 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.787 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.787 349552 DEBUG oslo_concurrency.lockutils [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.788 349552 DEBUG nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] No waiting events found dispatching network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.788 349552 WARNING nova.compute.manager [req-0b7ef9a9-f90e-4f48-921c-7f782dce2192 req-ca4e5998-e056-47cc-8fac-5053555b40b1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received unexpected event network-vif-plugged-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.837 349552 DEBUG nova.network.neutron [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.870 349552 INFO nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Took 1.51 seconds to deallocate network for instance.#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.916 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:27 compute-0 nova_compute[349548]: 2025-12-05 02:10:27.917 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.000 349552 DEBUG oslo_concurrency.processutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 15 KiB/s wr, 153 op/s
Dec  5 02:10:28 compute-0 ovn_controller[89286]: 2025-12-05T02:10:28Z|00123|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:10:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138225154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.572 349552 DEBUG oslo_concurrency.processutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.586 349552 DEBUG nova.compute.provider_tree [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.615 349552 DEBUG nova.scheduler.client.report [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.659 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.728 349552 INFO nova.scheduler.client.report [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Deleted allocations for instance 3391e1ba-0e6b-4113-b402-027e997b3cb9#033[00m
Dec  5 02:10:28 compute-0 nova_compute[349548]: 2025-12-05 02:10:28.813 349552 DEBUG oslo_concurrency.lockutils [None req-4fab6c09-48f3-47da-ab1d-4fa8f6aa81ce f18ce80284524cbb9497cac2c6e6bf32 f120ce30568246929ef2dc1a9f0bd0c7 - - default default] Lock "3391e1ba-0e6b-4113-b402-027e997b3cb9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:29 compute-0 podman[446000]: 2025-12-05 02:10:29.7303133 +0000 UTC m=+0.130179767 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 02:10:29 compute-0 podman[445999]: 2025-12-05 02:10:29.746642899 +0000 UTC m=+0.147529774 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:10:29 compute-0 podman[158197]: time="2025-12-05T02:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:10:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Dec  5 02:10:30 compute-0 nova_compute[349548]: 2025-12-05 02:10:30.047 349552 DEBUG nova.compute.manager [req-f9d46564-3e51-48bc-bc47-7c222d674257 req-29c9e701-2f99-4ed5-90e6-bcd8ee1ffe85 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Received event network-vif-deleted-26b950d4-e9c2-45ea-8e3a-bd06bf2227d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:10:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 158 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 125 op/s
Dec  5 02:10:30 compute-0 nova_compute[349548]: 2025-12-05 02:10:30.306 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: ERROR   02:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:10:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.456 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900616.4531763, 939ae9f2-b89c-4a19-96de-ab4dfc882a35 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.456 349552 INFO nova.compute.manager [-] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.475 349552 DEBUG nova.compute.manager [None req-4023f92e-94f5-4bee-b40b-db03e84fd6c0 - - - - - -] [instance: 939ae9f2-b89c-4a19-96de-ab4dfc882a35] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:31 compute-0 ovn_controller[89286]: 2025-12-05T02:10:31Z|00124|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:10:31 compute-0 nova_compute[349548]: 2025-12-05 02:10:31.651 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:31 compute-0 podman[446035]: 2025-12-05 02:10:31.750290292 +0000 UTC m=+0.132634446 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, maintainer=Red Hat, Inc.)
Dec  5 02:10:32 compute-0 nova_compute[349548]: 2025-12-05 02:10:32.123 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 14 KiB/s wr, 130 op/s
Dec  5 02:10:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:32.651 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:10:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:32.654 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:10:32 compute-0 nova_compute[349548]: 2025-12-05 02:10:32.654 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:33 compute-0 nova_compute[349548]: 2025-12-05 02:10:33.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1846: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.6 KiB/s wr, 68 op/s
Dec  5 02:10:35 compute-0 nova_compute[349548]: 2025-12-05 02:10:35.309 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:35 compute-0 nova_compute[349548]: 2025-12-05 02:10:35.922 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 850 KiB/s rd, 1.3 KiB/s wr, 55 op/s
Dec  5 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.088 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.127 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:37 compute-0 nova_compute[349548]: 2025-12-05 02:10:37.672 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:10:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 713 KiB/s rd, 1.2 KiB/s wr, 48 op/s
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.323 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.324 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d0bdee0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.336 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 59e35a32-9023-4e49-be56-9da10df3027f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 02:10:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:38.338 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.931 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.933 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:10:38 compute-0 nova_compute[349548]: 2025-12-05 02:10:38.934 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.307 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1982 Content-Type: application/json Date: Fri, 05 Dec 2025 02:10:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f3ceec08-ee75-4990-8404-30178aea2e92 x-openstack-request-id: req-f3ceec08-ee75-4990-8404-30178aea2e92 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.308 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "59e35a32-9023-4e49-be56-9da10df3027f", "name": "tempest-ServerActionsTestJSON-server-1678320742", "status": "ACTIVE", "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "user_id": "b4745812b7eb47908ded25b1eb7c7328", "metadata": {}, "hostId": "ec24e2cce3283e55f968b7a36269e7bf355c27e7ccc9833dd73aa657", "image": {"id": "e9091bfb-b431-47c9-a284-79372046956b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e9091bfb-b431-47c9-a284-79372046956b"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:08:38Z", "updated": "2025-12-05T02:10:15Z", "addresses": {"tempest-ServerActionsTestJSON-2010351729-network": [{"version": 4, "addr": "10.100.0.10", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:16:81:87"}, {"version": 4, "addr": "192.168.122.206", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:16:81:87"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/59e35a32-9023-4e49-be56-9da10df3027f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1953156472", "OS-SRV-USG:launched_at": "2025-12-05T02:08:56.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1840647419"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000008", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.308 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/59e35a32-9023-4e49-be56-9da10df3027f used request id req-f3ceec08-ee75-4990-8404-30178aea2e92 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '59e35a32-9023-4e49-be56-9da10df3027f', 'name': 'tempest-ServerActionsTestJSON-server-1678320742', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e9091bfb-b431-47c9-a284-79372046956b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000008', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'user_id': 'b4745812b7eb47908ded25b1eb7c7328', 'hostId': 'ec24e2cce3283e55f968b7a36269e7bf355c27e7ccc9833dd73aa657', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.312 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.313 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:10:39.312183) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.316 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:10:39.315830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.340 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.340 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:10:39.342493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>]
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:10:39.344703) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:10:39.347397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.409 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.410 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.412 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.latency volume: 1626049265 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:10:39.412451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.414 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.latency volume: 2427288 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.415 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:10:39.416365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.417 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.418 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.420 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.421 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:10:39.420095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.424 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.425 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:10:39.424374) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:10:39.428139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.473 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.475 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:10:39.475446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.476 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.477 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.477 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.478 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.479 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:10:39.478550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.481 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:10:39.481709) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.489 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 59e35a32-9023-4e49-be56-9da10df3027f / tapa240e2ef-17 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.489 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.491 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:10:39.491618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.492 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.493 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.494 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.495 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:10:39.494632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.495 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.496 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:10:39.497832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.499 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.501 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.503 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:10:39.501218) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.504 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.505 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:10:39.504724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:10:39.507632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.508 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1678320742>]
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.509 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 59e35a32-9023-4e49-be56-9da10df3027f: ceilometer.compute.pollsters.NoVolumeException
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:10:39.510813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.512 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:10:39.517784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.518 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.523 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:10:39.524427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.525 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:10:39.526171) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.526 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.527 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/cpu volume: 23500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.528 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:10:39.527591) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:10:39.528792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.529 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.530 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 DEBUG ceilometer.compute.pollsters [-] 59e35a32-9023-4e49-be56-9da10df3027f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.532 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.533 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.534 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.535 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:10:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:10:39.536 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:10:39.530999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:10:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 340 B/s wr, 4 op/s
Dec  5 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.257 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900625.2545006, 3391e1ba-0e6b-4113-b402-027e997b3cb9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.258 349552 INFO nova.compute.manager [-] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.277 349552 DEBUG nova.compute.manager [None req-4d4ac99f-cb50-4b9f-aff1-457682faf163 - - - - - -] [instance: 3391e1ba-0e6b-4113-b402-027e997b3cb9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.312 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:40.656 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:10:40 compute-0 nova_compute[349548]: 2025-12-05 02:10:40.925 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:41 compute-0 podman[446056]: 2025-12-05 02:10:41.704240626 +0000 UTC m=+0.107968203 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:10:41 compute-0 podman[446057]: 2025-12-05 02:10:41.716271344 +0000 UTC m=+0.128125719 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:10:41 compute-0 podman[446059]: 2025-12-05 02:10:41.724436044 +0000 UTC m=+0.108235491 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:10:41 compute-0 podman[446058]: 2025-12-05 02:10:41.758610513 +0000 UTC m=+0.156200468 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.775 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [{"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.802 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-59e35a32-9023-4e49-be56-9da10df3027f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.803 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.804 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.805 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.806 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.834 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.835 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.836 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.838 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:10:41 compute-0 nova_compute[349548]: 2025-12-05 02:10:41.839 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.130 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Dec  5 02:10:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:10:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2151506995' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.343 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.447 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.449 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.926 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.927 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3901MB free_disk=59.94267654418945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.928 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:42 compute-0 nova_compute[349548]: 2025-12-05 02:10:42.929 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.369 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 59e35a32-9023-4e49-be56-9da10df3027f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.370 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.370 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:10:43 compute-0 nova_compute[349548]: 2025-12-05 02:10:43.569 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:10:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:10:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364089829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.081 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.093 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.111 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.135 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.135 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.397 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:44 compute-0 nova_compute[349548]: 2025-12-05 02:10:44.398 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:45 compute-0 nova_compute[349548]: 2025-12-05 02:10:45.314 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:10:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:10:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:10:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/848933106' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:10:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:10:46 compute-0 ovn_controller[89286]: 2025-12-05T02:10:46Z|00125|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:10:46 compute-0 nova_compute[349548]: 2025-12-05 02:10:46.878 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:47 compute-0 nova_compute[349548]: 2025-12-05 02:10:47.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:10:49 compute-0 nova_compute[349548]: 2025-12-05 02:10:49.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:10:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1854: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:10:50 compute-0 nova_compute[349548]: 2025-12-05 02:10:50.318 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:50 compute-0 nova_compute[349548]: 2025-12-05 02:10:50.759 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:51 compute-0 ovn_controller[89286]: 2025-12-05T02:10:51Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:81:87 10.100.0.10
Dec  5 02:10:51 compute-0 nova_compute[349548]: 2025-12-05 02:10:51.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:52 compute-0 nova_compute[349548]: 2025-12-05 02:10:52.138 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 355 KiB/s rd, 11 KiB/s wr, 32 op/s
Dec  5 02:10:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 401 KiB/s rd, 11 KiB/s wr, 35 op/s
Dec  5 02:10:55 compute-0 podman[446325]: 2025-12-05 02:10:55.311169822 +0000 UTC m=+0.138953954 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 02:10:55 compute-0 podman[446326]: 2025-12-05 02:10:55.314530416 +0000 UTC m=+0.138034118 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:10:55 compute-0 nova_compute[349548]: 2025-12-05 02:10:55.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:55 compute-0 podman[446394]: 2025-12-05 02:10:55.536752247 +0000 UTC m=+0.114502157 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:10:55 compute-0 podman[446394]: 2025-12-05 02:10:55.655704508 +0000 UTC m=+0.233454418 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:10:56 compute-0 ovn_controller[89286]: 2025-12-05T02:10:56Z|00126|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:10:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 136 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 525 KiB/s rd, 11 KiB/s wr, 42 op/s
Dec  5 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.206 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:10:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:10:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:10:56 compute-0 nova_compute[349548]: 2025-12-05 02:10:56.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:56 compute-0 nova_compute[349548]: 2025-12-05 02:10:56.686 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:10:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:56 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:10:56 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:57 compute-0 nova_compute[349548]: 2025-12-05 02:10:57.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:10:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:57 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7f1bb807-9954-47a4-a714-30dc91a93359 does not exist
Dec  5 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7664a778-cc9c-49b4-bd84-cd7885c21a05 does not exist
Dec  5 02:10:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 906892c3-4c74-4b5f-bfba-2f94fd8bd5db does not exist
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:10:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:10:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec  5 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:10:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.220477197 +0000 UTC m=+0.090545724 container create aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.18428521 +0000 UTC m=+0.054353787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:10:59 compute-0 systemd[1]: Started libpod-conmon-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope.
Dec  5 02:10:59 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.373744092 +0000 UTC m=+0.243812659 container init aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.391513501 +0000 UTC m=+0.261582018 container start aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.397757756 +0000 UTC m=+0.267826323 container attach aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:10:59 compute-0 quizzical_dhawan[446826]: 167 167
Dec  5 02:10:59 compute-0 systemd[1]: libpod-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope: Deactivated successfully.
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.40394546 +0000 UTC m=+0.274013977 container died aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 02:10:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba47947482d24e67feee9b152caf25bdfa716a3ab21debe23ded31f028a96000-merged.mount: Deactivated successfully.
Dec  5 02:10:59 compute-0 podman[446810]: 2025-12-05 02:10:59.487165727 +0000 UTC m=+0.357234244 container remove aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_dhawan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:10:59 compute-0 systemd[1]: libpod-conmon-aa95ab66662aeaf231aa458c691ae6a5bfa13437adae88a2f6fd9dffa9fedf1b.scope: Deactivated successfully.
Dec  5 02:10:59 compute-0 podman[158197]: time="2025-12-05T02:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.7878252 +0000 UTC m=+0.102706745 container create 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:10:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8654 "" "Go-http-client/1.1"
Dec  5 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.752603061 +0000 UTC m=+0.067484656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:10:59 compute-0 systemd[1]: Started libpod-conmon-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope.
Dec  5 02:10:59 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.932291088 +0000 UTC m=+0.247172643 container init 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.967744264 +0000 UTC m=+0.282625799 container start 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:10:59 compute-0 podman[446848]: 2025-12-05 02:10:59.97404063 +0000 UTC m=+0.288922185 container attach 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:10:59 compute-0 podman[446861]: 2025-12-05 02:10:59.980938774 +0000 UTC m=+0.124611481 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:10:59 compute-0 podman[446860]: 2025-12-05 02:10:59.998663132 +0000 UTC m=+0.145963601 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:11:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1859: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec  5 02:11:00 compute-0 nova_compute[349548]: 2025-12-05 02:11:00.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:01 compute-0 beautiful_banzai[446874]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:11:01 compute-0 beautiful_banzai[446874]: --> relative data size: 1.0
Dec  5 02:11:01 compute-0 beautiful_banzai[446874]: --> All data devices are unavailable
Dec  5 02:11:01 compute-0 systemd[1]: libpod-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Deactivated successfully.
Dec  5 02:11:01 compute-0 systemd[1]: libpod-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Consumed 1.269s CPU time.
Dec  5 02:11:01 compute-0 podman[446848]: 2025-12-05 02:11:01.313682156 +0000 UTC m=+1.628563701 container died 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:11:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec6b7cfdbfeed907f929288f90e8dcdcdc96f4dce5c9693016c9e909c9e4d95-merged.mount: Deactivated successfully.
Dec  5 02:11:01 compute-0 podman[446848]: 2025-12-05 02:11:01.408220211 +0000 UTC m=+1.723101736 container remove 8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:11:01 compute-0 openstack_network_exporter[366555]: ERROR   02:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:11:01 compute-0 systemd[1]: libpod-conmon-8fddc251eab8b72e383dff0dad75640265f1bdd08ae4e2056b8c3e3fb3ea8853.scope: Deactivated successfully.
Dec  5 02:11:02 compute-0 podman[447010]: 2025-12-05 02:11:02.052488985 +0000 UTC m=+0.138059118 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, architecture=x86_64, release-0.7.12=, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec  5 02:11:02 compute-0 nova_compute[349548]: 2025-12-05 02:11:02.146 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 528 KiB/s rd, 22 KiB/s wr, 44 op/s
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.626420465 +0000 UTC m=+0.092445898 container create c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.593551042 +0000 UTC m=+0.059576535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:11:02 compute-0 systemd[1]: Started libpod-conmon-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope.
Dec  5 02:11:02 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.771612903 +0000 UTC m=+0.237638396 container init c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.788825766 +0000 UTC m=+0.254851189 container start c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.794609549 +0000 UTC m=+0.260634992 container attach c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:11:02 compute-0 keen_brahmagupta[447107]: 167 167
Dec  5 02:11:02 compute-0 systemd[1]: libpod-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope: Deactivated successfully.
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.800296988 +0000 UTC m=+0.266322441 container died c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:11:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a175741fca3584742b0c518b33b753782242c1e2b8754409801493a17c84ba-merged.mount: Deactivated successfully.
Dec  5 02:11:02 compute-0 podman[447091]: 2025-12-05 02:11:02.884255236 +0000 UTC m=+0.350280679 container remove c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:11:02 compute-0 systemd[1]: libpod-conmon-c9de4190c83aa6f4a5aa42a12d48677337295eb879e53b5669e8faeafb68f97b.scope: Deactivated successfully.
Dec  5 02:11:03 compute-0 nova_compute[349548]: 2025-12-05 02:11:03.155 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.176435223 +0000 UTC m=+0.104127006 container create baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.131462529 +0000 UTC m=+0.059154372 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:11:03 compute-0 systemd[1]: Started libpod-conmon-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope.
Dec  5 02:11:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.359286167 +0000 UTC m=+0.286977990 container init baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.379284779 +0000 UTC m=+0.306976572 container start baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:11:03 compute-0 podman[447130]: 2025-12-05 02:11:03.385720169 +0000 UTC m=+0.313412012 container attach baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 02:11:03 compute-0 ovn_controller[89286]: 2025-12-05T02:11:03Z|00127|binding|INFO|Releasing lport 3d0916d7-6f03-4daf-8f3b-126228223c53 from this chassis (sb_readonly=0)
Dec  5 02:11:04 compute-0 nova_compute[349548]: 2025-12-05 02:11:04.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:04 compute-0 gifted_bose[447147]: {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    "0": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "devices": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "/dev/loop3"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            ],
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_name": "ceph_lv0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_size": "21470642176",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "name": "ceph_lv0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "tags": {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_name": "ceph",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.crush_device_class": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.encrypted": "0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_id": "0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.vdo": "0"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            },
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "vg_name": "ceph_vg0"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        }
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    ],
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    "1": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "devices": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "/dev/loop4"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            ],
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_name": "ceph_lv1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_size": "21470642176",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "name": "ceph_lv1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "tags": {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_name": "ceph",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.crush_device_class": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.encrypted": "0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_id": "1",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.vdo": "0"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            },
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "vg_name": "ceph_vg1"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        }
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    ],
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    "2": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "devices": [
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "/dev/loop5"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            ],
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_name": "ceph_lv2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_size": "21470642176",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "name": "ceph_lv2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "tags": {
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.cluster_name": "ceph",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.crush_device_class": "",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.encrypted": "0",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osd_id": "2",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:                "ceph.vdo": "0"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            },
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "type": "block",
Dec  5 02:11:04 compute-0 gifted_bose[447147]:            "vg_name": "ceph_vg2"
Dec  5 02:11:04 compute-0 gifted_bose[447147]:        }
Dec  5 02:11:04 compute-0 gifted_bose[447147]:    ]
Dec  5 02:11:04 compute-0 gifted_bose[447147]: }
Dec  5 02:11:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 11 KiB/s wr, 11 op/s
Dec  5 02:11:04 compute-0 systemd[1]: libpod-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope: Deactivated successfully.
Dec  5 02:11:04 compute-0 podman[447130]: 2025-12-05 02:11:04.209957859 +0000 UTC m=+1.137649622 container died baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f463dac364302e5579c57a1d704ab4f472ff238a957a2c17db09a24ec0a5718-merged.mount: Deactivated successfully.
Dec  5 02:11:04 compute-0 podman[447130]: 2025-12-05 02:11:04.296653244 +0000 UTC m=+1.224345007 container remove baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:11:04 compute-0 systemd[1]: libpod-conmon-baa07be29eeebc3cc46d08c026f0d5aebcfa38865f38f9f14d23a6bbcc3bdfe6.scope: Deactivated successfully.
Dec  5 02:11:05 compute-0 nova_compute[349548]: 2025-12-05 02:11:05.328 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.511280678 +0000 UTC m=+0.105338290 container create 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.453272839 +0000 UTC m=+0.047330521 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:11:05 compute-0 systemd[1]: Started libpod-conmon-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope.
Dec  5 02:11:05 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.657611088 +0000 UTC m=+0.251668700 container init 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.676248901 +0000 UTC m=+0.270306533 container start 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.683664359 +0000 UTC m=+0.277721971 container attach 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:11:05 compute-0 compassionate_pasteur[447320]: 167 167
Dec  5 02:11:05 compute-0 systemd[1]: libpod-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope: Deactivated successfully.
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.690291815 +0000 UTC m=+0.284349457 container died 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65fc6282d4fd209e4892a2b78507b078f0874069acf1daa5cf880bffcdf93bb-merged.mount: Deactivated successfully.
Dec  5 02:11:05 compute-0 podman[447305]: 2025-12-05 02:11:05.784167202 +0000 UTC m=+0.378224844 container remove 6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:11:05 compute-0 systemd[1]: libpod-conmon-6086dc202c1b27895b64a531c496d77759b8f3cb6ca3e95a56a31b6781bc3f19.scope: Deactivated successfully.
Dec  5 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.081024029 +0000 UTC m=+0.086358826 container create 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.060165554 +0000 UTC m=+0.065500371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:11:06 compute-0 systemd[1]: Started libpod-conmon-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope.
Dec  5 02:11:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 11 KiB/s wr, 9 op/s
Dec  5 02:11:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.247095324 +0000 UTC m=+0.252430211 container init 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.259389939 +0000 UTC m=+0.264724776 container start 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:11:06 compute-0 podman[447344]: 2025-12-05 02:11:06.268812014 +0000 UTC m=+0.274146891 container attach 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:11:07 compute-0 nova_compute[349548]: 2025-12-05 02:11:07.154 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]: {
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_id": 0,
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "type": "bluestore"
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    },
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_id": 1,
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "type": "bluestore"
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    },
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_id": 2,
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:        "type": "bluestore"
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]:    }
Dec  5 02:11:07 compute-0 unruffled_neumann[447360]: }
Dec  5 02:11:07 compute-0 systemd[1]: libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Deactivated successfully.
Dec  5 02:11:07 compute-0 systemd[1]: libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Consumed 1.168s CPU time.
Dec  5 02:11:07 compute-0 conmon[447360]: conmon 9fd97c4e20ad393be037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope/container/memory.events
Dec  5 02:11:07 compute-0 podman[447344]: 2025-12-05 02:11:07.431102407 +0000 UTC m=+1.436437214 container died 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e6f5614f73b16173fd3e2b1f8fb60b92a45ad3ff91a78fd55998170f30bcb23-merged.mount: Deactivated successfully.
Dec  5 02:11:07 compute-0 podman[447344]: 2025-12-05 02:11:07.563843515 +0000 UTC m=+1.569178342 container remove 9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:11:07 compute-0 systemd[1]: libpod-conmon-9fd97c4e20ad393be0371cc61eab7767c974e72757fecb3c0cf9e85f0a09fc3b.scope: Deactivated successfully.
Dec  5 02:11:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:11:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:11:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:11:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:11:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15dcc09c-f284-4bee-a320-a3596a46e16b does not exist
Dec  5 02:11:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 481959c2-9325-4171-9f64-9a4d58d3ef54 does not exist
Dec  5 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  5 02:11:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 14 KiB/s wr, 7 op/s
Dec  5 02:11:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:11:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec  5 02:11:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec  5 02:11:08 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec  5 02:11:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 3.6 KiB/s wr, 6 op/s
Dec  5 02:11:10 compute-0 nova_compute[349548]: 2025-12-05 02:11:10.333 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:12 compute-0 nova_compute[349548]: 2025-12-05 02:11:12.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 138 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.8 KiB/s wr, 11 op/s
Dec  5 02:11:12 compute-0 podman[447458]: 2025-12-05 02:11:12.722248811 +0000 UTC m=+0.109387893 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:11:12 compute-0 podman[447456]: 2025-12-05 02:11:12.72968133 +0000 UTC m=+0.129960401 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:11:12 compute-0 podman[447455]: 2025-12-05 02:11:12.735358109 +0000 UTC m=+0.136608608 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 02:11:12 compute-0 podman[447457]: 2025-12-05 02:11:12.768437258 +0000 UTC m=+0.164204273 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:11:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec  5 02:11:15 compute-0 nova_compute[349548]: 2025-12-05 02:11:15.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:11:16
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['vms', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control', 'default.rgw.log', '.mgr']
Dec  5 02:11:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.591 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.592 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.611 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.736 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.737 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.747 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.747 349552 INFO nova.compute.claims [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Claim successful on node compute-0.ctlplane.example.com
Dec  5 02:11:16 compute-0 nova_compute[349548]: 2025-12-05 02:11:16.885 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.160 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:11:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3918919140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.665 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.780s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.679 349552 DEBUG nova.compute.provider_tree [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.705 349552 DEBUG nova.scheduler.client.report [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.756 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.758 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.821 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.823 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.847 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.870 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.991 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.994 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  5 02:11:17 compute-0 nova_compute[349548]: 2025-12-05 02:11:17.995 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating image(s)
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.049 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.120 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:11:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.188 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:11:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.200 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "ce40e952b4771285622230948599d16442d55b06" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.202 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.211 349552 DEBUG nova.policy [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  5 02:11:18 compute-0 nova_compute[349548]: 2025-12-05 02:11:18.537 349552 DEBUG nova.virt.libvirt.imagebackend [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image locations are: [{'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://cbd280d3-cbd8-528b-ace6-2b3a887cdcee/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.261 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Successfully created port: 706f9405-4061-481e-a252-9b14f4534a4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.615 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.709 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.711 349552 DEBUG nova.virt.images [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.713 349552 DEBUG nova.privsep.utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.714 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.979 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.part /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:19 compute-0 nova_compute[349548]: 2025-12-05 02:11:19.988 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.105 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06.converted --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.108 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.168 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.181 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 292fd084-0808-4a80-adc1-6ab1f28e188a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 159 MiB data, 338 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.8 MiB/s wr, 11 op/s
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.226 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.227 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.228 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.229 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.230 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.234 349552 INFO nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Terminating instance
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.237 349552 DEBUG nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.250 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Successfully updated port: 706f9405-4061-481e-a252-9b14f4534a4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.270 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.271 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.271 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.342 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 kernel: tapa240e2ef-17 (unregistering): left promiscuous mode
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.374 349552 DEBUG nova.compute.manager [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-changed-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.375 349552 DEBUG nova.compute.manager [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Refreshing instance network info cache due to event network-changed-706f9405-4061-481e-a252-9b14f4534a4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.375 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:11:20 compute-0 NetworkManager[49092]: <info>  [1764900680.3854] device (tapa240e2ef-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.396 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00128|binding|INFO|Releasing lport a240e2ef-1773-4509-ac04-eae1f5d36e08 from this chassis (sb_readonly=0)
Dec  5 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00129|binding|INFO|Setting lport a240e2ef-1773-4509-ac04-eae1f5d36e08 down in Southbound
Dec  5 02:11:20 compute-0 ovn_controller[89286]: 2025-12-05T02:11:20Z|00130|binding|INFO|Removing iface tapa240e2ef-17 ovn-installed in OVS
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.420 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:81:87 10.100.0.10'], port_security=['fa:16:3e:16:81:87 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '59e35a32-9023-4e49-be56-9da10df3027f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd34a6a62cf94436a2b836fa4f49c4fa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0ad1486e-ab79-4bad-bad5-777f54ed0ef1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.206', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=880ae0ff-40ec-4de0-a5e7-7c2cf13ecf72, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=a240e2ef-1773-4509-ac04-eae1f5d36e08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.418 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.424 287122 INFO neutron.agent.ovn.metadata.agent [-] Port a240e2ef-1773-4509-ac04-eae1f5d36e08 in datapath a9bc378d-2d4b-4990-99ce-02656b1fec0d unbound from our chassis
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.426 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a9bc378d-2d4b-4990-99ce-02656b1fec0d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.429 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[87387183-1404-4998-b659-ae390afe87a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.430 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d namespace which is not needed anymore
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec  5 02:11:20 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Consumed 48.502s CPU time.
Dec  5 02:11:20 compute-0 systemd-machined[138700]: Machine qemu-10-instance-00000008 terminated.
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.482 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.492 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.506 349552 INFO nova.virt.libvirt.driver [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Instance destroyed successfully.
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.506 349552 DEBUG nova.objects.instance [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lazy-loading 'resources' on Instance uuid 59e35a32-9023-4e49-be56-9da10df3027f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.524 349552 DEBUG nova.virt.libvirt.vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:08:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1678320742',display_name='tempest-ServerActionsTestJSON-server-1678320742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1678320742',id=8,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKmirf5PzEcVuq6RNudVuflcugnc6r3Jy50MVVEH7tkttBe4cf5zv9kQC3Ss53DUYZTE/QaGNMMsby6pKc4tzWxZGKXsndhFMr79gHGA5klSxVz8kWH2nsbelSj8zkK0fg==',key_name='tempest-keypair-1953156472',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:08:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd34a6a62cf94436a2b836fa4f49c4fa',ramdisk_id='',reservation_id='r-i4td7gfo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1914764435',owner_user_name='tempest-ServerActionsTestJSON-1914764435-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b4745812b7eb47908ded25b1eb7c7328',uuid=59e35a32-9023-4e49-be56-9da10df3027f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.524 349552 DEBUG nova.network.os_vif_util [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converting VIF {"id": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "address": "fa:16:3e:16:81:87", "network": {"id": "a9bc378d-2d4b-4990-99ce-02656b1fec0d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2010351729-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd34a6a62cf94436a2b836fa4f49c4fa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa240e2ef-17", "ovs_interfaceid": "a240e2ef-1773-4509-ac04-eae1f5d36e08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.525 349552 DEBUG nova.network.os_vif_util [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.525 349552 DEBUG os_vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.527 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.527 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa240e2ef-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.532 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.536 349552 INFO os_vif [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:81:87,bridge_name='br-int',has_traffic_filtering=True,id=a240e2ef-1773-4509-ac04-eae1f5d36e08,network=Network(a9bc378d-2d4b-4990-99ce-02656b1fec0d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa240e2ef-17')#033[00m
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : haproxy version is 2.8.14-c23fe91
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [NOTICE]   (445445) : path to executable is /usr/sbin/haproxy
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : Exiting Master process...
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : Exiting Master process...
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [ALERT]    (445445) : Current worker (445447) exited with code 143 (Terminated)
Dec  5 02:11:20 compute-0 neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d[445441]: [WARNING]  (445445) : All workers exited. Exiting... (0)
Dec  5 02:11:20 compute-0 systemd[1]: libpod-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope: Deactivated successfully.
Dec  5 02:11:20 compute-0 podman[447704]: 2025-12-05 02:11:20.646802886 +0000 UTC m=+0.078837035 container died 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.652 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 292fd084-0808-4a80-adc1-6ab1f28e188a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da-userdata-shm.mount: Deactivated successfully.
Dec  5 02:11:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a4098034b79b17a1b0a33ca61c1f904969485d36ccd5269a78d56bbd845de7-merged.mount: Deactivated successfully.
Dec  5 02:11:20 compute-0 podman[447704]: 2025-12-05 02:11:20.723217372 +0000 UTC m=+0.155251491 container cleanup 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  5 02:11:20 compute-0 systemd[1]: libpod-conmon-2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da.scope: Deactivated successfully.
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.793 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] resizing rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:11:20 compute-0 podman[447766]: 2025-12-05 02:11:20.824707063 +0000 UTC m=+0.068050542 container remove 2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.838 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[538396d8-3fb6-4c16-9c81-e4e049d7c71f]: (4, ('Fri Dec  5 02:11:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da)\n2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da\nFri Dec  5 02:11:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d (2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da)\n2907e2a2f5c4404f51d919df2de6dffcf082807c1b7a5b75e70c0f84895d67da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.842 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d8989a44-bc83-4279-88cf-9e61df040bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.844 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa9bc378d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.847 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:20 compute-0 kernel: tapa9bc378d-20: left promiscuous mode
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.854 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.862 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.866 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[de89a6ba-86fb-4495-9959-b5e507debfb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.883 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0a117b6f-e069-49d1-b5c0-076be9a9f3cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.884 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[acb97763-dde2-4b4e-973c-144c38026d73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.905 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[63a6bf10-2168-422c-b88b-42569d227c56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670092, 'reachable_time': 27790, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 447810, 'error': None, 'target': 'ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.907 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a9bc378d-2d4b-4990-99ce-02656b1fec0d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:11:20 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:20.907 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[733590c3-df23-4a7e-805b-573b29813295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:20 compute-0 systemd[1]: run-netns-ovnmeta\x2da9bc378d\x2d2d4b\x2d4990\x2d99ce\x2d02656b1fec0d.mount: Deactivated successfully.
Dec  5 02:11:20 compute-0 nova_compute[349548]: 2025-12-05 02:11:20.986 349552 DEBUG nova.objects.instance [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'migration_context' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.014 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Ensure instance console log exists: /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.015 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.016 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.338 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.339 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.339 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG oslo_concurrency.lockutils [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.340 349552 DEBUG nova.compute.manager [req-5cf25255-d5cb-4637-81d6-708b1a4645a2 req-7e5b43c2-856f-4f4a-a7c7-6fd554fad6be a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-unplugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.388 349552 INFO nova.virt.libvirt.driver [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deleting instance files /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f_del#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.389 349552 INFO nova.virt.libvirt.driver [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deletion of /var/lib/nova/instances/59e35a32-9023-4e49-be56-9da10df3027f_del complete#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.440 349552 INFO nova.compute.manager [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.441 349552 DEBUG oslo.service.loopingcall [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.442 349552 DEBUG nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:11:21 compute-0 nova_compute[349548]: 2025-12-05 02:11:21.442 349552 DEBUG nova.network.neutron [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.018 349552 DEBUG nova.network.neutron [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.050 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.050 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance network_info: |[{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.051 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.051 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Refreshing network info cache for port 706f9405-4061-481e-a252-9b14f4534a4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.056 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start _get_guest_xml network_info=[{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.069 349552 WARNING nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.086 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.087 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.093 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.094 349552 DEBUG nova.virt.libvirt.host [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.094 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.095 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.095 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.096 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.096 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.097 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.097 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.098 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.099 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.099 349552 DEBUG nova.virt.hardware [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.104 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.163 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 162 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.8 MiB/s wr, 38 op/s
Dec  5 02:11:22 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:11:22 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120132925' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.630 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.681 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.706 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.759 349552 DEBUG nova.network.neutron [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.786 349552 INFO nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Took 1.34 seconds to deallocate network for instance.#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.838 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.839 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:22 compute-0 nova_compute[349548]: 2025-12-05 02:11:22.924 349552 DEBUG oslo_concurrency.processutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:11:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4210066393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.140 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated VIF entry in instance network info cache for port 706f9405-4061-481e-a252-9b14f4534a4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.142 349552 DEBUG nova.network.neutron [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.165 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.167 349552 DEBUG nova.virt.libvirt.vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639
068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:17Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.168 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.169 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.172 349552 DEBUG nova.objects.instance [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'pci_devices' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.175 349552 DEBUG oslo_concurrency.lockutils [req-21032ee1-a71c-44d9-a0a1-ad388778cfde req-fcc5a807-5a03-4e35-b1f2-ec4c759aff2b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.189 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <uuid>292fd084-0808-4a80-adc1-6ab1f28e188a</uuid>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <name>instance-0000000b</name>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:name>te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa</nova:name>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:11:22</nova:creationTime>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:user uuid="99591ed8361e41579fee1d14f16bf0f7">tempest-PrometheusGabbiTest-257639068-project-member</nova:user>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:project uuid="b01709a3378347e1a3f25eeb2b8b1bca">tempest-PrometheusGabbiTest-257639068</nova:project>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <nova:port uuid="706f9405-4061-481e-a252-9b14f4534a4e">
Dec  5 02:11:23 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.151" ipVersion="4"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <system>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="serial">292fd084-0808-4a80-adc1-6ab1f28e188a</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="uuid">292fd084-0808-4a80-adc1-6ab1f28e188a</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </system>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <os>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </os>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <features>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </features>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/292fd084-0808-4a80-adc1-6ab1f28e188a_disk">
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </source>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config">
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </source>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:11:23 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:cf:10:bc"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <target dev="tap706f9405-40"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/console.log" append="off"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <video>
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </video>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:11:23 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:11:23 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:11:23 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:11:23 compute-0 nova_compute[349548]: </domain>
Dec  5 02:11:23 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.190 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Preparing to wait for external event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.191 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.191 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.192 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.194 349552 DEBUG nova.virt.libvirt.vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:17Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.194 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.195 349552 DEBUG nova.network.os_vif_util [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.196 349552 DEBUG os_vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.198 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.203 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.204 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap706f9405-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.205 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap706f9405-40, col_values=(('external_ids', {'iface-id': '706f9405-4061-481e-a252-9b14f4534a4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cf:10:bc', 'vm-uuid': '292fd084-0808-4a80-adc1-6ab1f28e188a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:23 compute-0 NetworkManager[49092]: <info>  [1764900683.2091] manager: (tap706f9405-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.207 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.210 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.218 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.220 349552 INFO os_vif [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40')#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.310 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.311 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.311 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No VIF found with MAC fa:16:3e:cf:10:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.312 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Using config drive#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.362 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:11:23 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116091337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.456 349552 DEBUG oslo_concurrency.processutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.465 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.465 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "59e35a32-9023-4e49-be56-9da10df3027f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG oslo_concurrency.lockutils [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.466 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] No waiting events found dispatching network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.467 349552 WARNING nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received unexpected event network-vif-plugged-a240e2ef-1773-4509-ac04-eae1f5d36e08 for instance with vm_state deleted and task_state None.#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.467 349552 DEBUG nova.compute.manager [req-bb7ce5b0-28a1-4d59-8a93-255ac5f3edc5 req-149b21eb-0122-47d8-8fd3-93a18afa19c0 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Received event network-vif-deleted-a240e2ef-1773-4509-ac04-eae1f5d36e08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.473 349552 DEBUG nova.compute.provider_tree [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.490 349552 DEBUG nova.scheduler.client.report [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.516 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.540 349552 INFO nova.scheduler.client.report [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Deleted allocations for instance 59e35a32-9023-4e49-be56-9da10df3027f#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.590 349552 DEBUG oslo_concurrency.lockutils [None req-c12f30ce-fbb6-4aa9-be4a-a2e382891ff2 b4745812b7eb47908ded25b1eb7c7328 dd34a6a62cf94436a2b836fa4f49c4fa - - default default] Lock "59e35a32-9023-4e49-be56-9da10df3027f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.724 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.778 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Creating config drive at /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.791 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrp1zrla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.937 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrp1zrla" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.984 349552 DEBUG nova.storage.rbd_utils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:23 compute-0 nova_compute[349548]: 2025-12-05 02:11:23.993 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 148 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.3 MiB/s wr, 53 op/s
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.321 349552 DEBUG oslo_concurrency.processutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config 292fd084-0808-4a80-adc1-6ab1f28e188a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.322 349552 INFO nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deleting local config drive /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a/disk.config because it was imported into RBD.#033[00m
Dec  5 02:11:24 compute-0 kernel: tap706f9405-40: entered promiscuous mode
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4058] manager: (tap706f9405-40): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Dec  5 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00131|binding|INFO|Claiming lport 706f9405-4061-481e-a252-9b14f4534a4e for this chassis.
Dec  5 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00132|binding|INFO|706f9405-4061-481e-a252-9b14f4534a4e: Claiming fa:16:3e:cf:10:bc 10.100.0.151
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.408 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.421 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:10:bc 10.100.0.151'], port_security=['fa:16:3e:cf:10:bc 10.100.0.151'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.151/16', 'neutron:device_id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=706f9405-4061-481e-a252-9b14f4534a4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.423 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 706f9405-4061-481e-a252-9b14f4534a4e in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 bound to our chassis#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.424 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.441 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[45cde6d6-2875-4a24-83e1-0718adb4dda1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.442 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7842201-31 in ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.445 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7842201-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.445 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b15ed5b3-1d76-4a5e-8604-fdcb5f5d8246]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.446 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cf51c271-9200-4ae4-ba07-1ac27eeca4ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00133|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e ovn-installed in OVS
Dec  5 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00134|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e up in Southbound
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.469 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[fb41cc9e-9689-44a2-a501-70ad03a7468d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 systemd-udevd[447990]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.472 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 systemd-machined[138700]: New machine qemu-12-instance-0000000b.
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4973] device (tap706f9405-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:11:24 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.4981] device (tap706f9405-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.505 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[05294f40-f537-45d0-abc3-8fa4285d1264]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.548 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[ecabc19f-9b9c-48da-94cf-714bcb6e5596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.555 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe2e687-7c54-447f-9826-2a1082fb4152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.5580] manager: (tapd7842201-30): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Dec  5 02:11:24 compute-0 systemd-udevd[447993]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.589 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[76af0377-8203-4c80-b389-118a50095911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.594 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d22b9c55-26bc-479d-813f-875dc9b7269e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.6239] device (tapd7842201-30): carrier: link connected
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.629 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[29bd6d18-2ae3-40da-80ca-70b32befeb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.658 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0444becc-5cc8-4002-b738-4567d60337b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 18430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448021, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.675 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[94e7cc0c-05e1-4f47-af1f-c66f93dd50ed]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:2670'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677128, 'tstamp': 677128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448022, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.696 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[17a2c770-b2e6-421f-8d19-94574feaafc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 18430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448023, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.736 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed58de0-a3ad-41ee-8483-f23339c0777c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.814 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0c441a62-138b-4257-b7ba-bc17deccf97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.816 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.816 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.817 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.819 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 kernel: tapd7842201-30: entered promiscuous mode
Dec  5 02:11:24 compute-0 NetworkManager[49092]: <info>  [1764900684.8227] manager: (tapd7842201-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.823 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.825 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.827 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 ovn_controller[89286]: 2025-12-05T02:11:24Z|00135|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.860 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 nova_compute[349548]: 2025-12-05 02:11:24.862 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.865 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.867 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5a70d8ef-086d-449a-af50-08258ab900ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.869 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/d7842201-32d0-4f34-ad6b-51f98e5f8322.pid.haproxy
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID d7842201-32d0-4f34-ad6b-51f98e5f8322
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:11:24 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:24.870 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'env', 'PROCESS_TAG=haproxy-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7842201-32d0-4f34-ad6b-51f98e5f8322.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.288 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900685.2874427, 292fd084-0808-4a80-adc1-6ab1f28e188a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.288 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Started (Lifecycle Event)#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.322 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.331 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900685.2876163, 292fd084-0808-4a80-adc1-6ab1f28e188a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.332 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.355 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.363 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:11:25 compute-0 nova_compute[349548]: 2025-12-05 02:11:25.388 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.496230564 +0000 UTC m=+0.110235477 container create 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.439720166 +0000 UTC m=+0.053725129 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:11:25 compute-0 systemd[1]: Started libpod-conmon-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope.
Dec  5 02:11:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b0054f478c906d197442626f618ca33515a9d994cb127e662a2ffd07bf0dae3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.68018297 +0000 UTC m=+0.294187943 container init 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:11:25 compute-0 podman[448095]: 2025-12-05 02:11:25.697153986 +0000 UTC m=+0.311158869 container start 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:11:25 compute-0 podman[448109]: 2025-12-05 02:11:25.711843949 +0000 UTC m=+0.145773555 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:11:25 compute-0 podman[448108]: 2025-12-05 02:11:25.715350897 +0000 UTC m=+0.152494443 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 02:11:25 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : New worker (448155) forked
Dec  5 02:11:25 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : Loading success.
Dec  5 02:11:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 124 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:11:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:11:27 compute-0 nova_compute[349548]: 2025-12-05 02:11:27.166 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec  5 02:11:28 compute-0 nova_compute[349548]: 2025-12-05 02:11:28.209 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:29 compute-0 ovn_controller[89286]: 2025-12-05T02:11:29Z|00136|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:11:29 compute-0 nova_compute[349548]: 2025-12-05 02:11:29.025 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:29 compute-0 ovn_controller[89286]: 2025-12-05T02:11:29Z|00137|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:11:29 compute-0 nova_compute[349548]: 2025-12-05 02:11:29.296 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:29 compute-0 podman[158197]: time="2025-12-05T02:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:11:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.102 349552 DEBUG nova.compute.manager [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.102 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG oslo_concurrency.lockutils [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.103 349552 DEBUG nova.compute.manager [req-3eedf5c9-5795-4721-ad25-d222316f9e15 req-e38c6b5c-925c-43de-baee-30f1dea46b2f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Processing event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.105 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.114 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900690.1128616, 292fd084-0808-4a80-adc1-6ab1f28e188a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.115 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.117 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.126 349552 INFO nova.virt.libvirt.driver [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance spawned successfully.#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.126 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.141 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.152 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.158 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.159 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.160 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.161 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.163 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.163 349552 DEBUG nova.virt.libvirt.driver [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.177 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:11:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.225 349552 INFO nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 12.23 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.225 349552 DEBUG nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.302 349552 INFO nova.compute.manager [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 13.63 seconds to build instance.#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.320 349552 DEBUG oslo_concurrency.lockutils [None req-e01c443d-f173-4511-b877-5b41ca5a6106 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.673 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.673 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.691 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:11:30 compute-0 podman[448165]: 2025-12-05 02:11:30.697819992 +0000 UTC m=+0.110092763 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  5 02:11:30 compute-0 podman[448166]: 2025-12-05 02:11:30.722233767 +0000 UTC m=+0.120359881 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.783 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.783 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.795 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.795 349552 INFO nova.compute.claims [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:11:30 compute-0 nova_compute[349548]: 2025-12-05 02:11:30.948 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:11:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478513699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:11:31 compute-0 openstack_network_exporter[366555]: ERROR   02:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.447 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.467 349552 DEBUG nova.compute.provider_tree [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.491 349552 DEBUG nova.scheduler.client.report [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.520 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.521 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.584 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.584 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.608 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.635 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.739 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.742 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.743 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating image(s)#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.809 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.879 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.940 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:31 compute-0 nova_compute[349548]: 2025-12-05 02:11:31.952 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.005 349552 DEBUG nova.policy [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6aaead05b2404fec8f687504ed800a2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.071 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.072 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.073 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.073 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.117 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.128 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.169 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.229 349552 DEBUG nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.229 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG oslo_concurrency.lockutils [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 DEBUG nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.230 349552 WARNING nova.compute.manager [req-40483d47-b453-4021-96c8-e26cd8202297 req-350f0c71-4d1e-429f-af61-3d79ae78c205 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received unexpected event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with vm_state active and task_state None.
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.670 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:32 compute-0 podman[448317]: 2025-12-05 02:11:32.730060818 +0000 UTC m=+0.134536450 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=kepler, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=)
Dec  5 02:11:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:32.868 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  5 02:11:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:32.869 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.878 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] resizing rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.950 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:32 compute-0 nova_compute[349548]: 2025-12-05 02:11:32.997 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Successfully created port: 1e754fc7-106a-43d2-a675-79c30089904b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.141 349552 DEBUG nova.objects.instance [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'migration_context' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.155 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.155 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Ensure instance console log exists: /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.156 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.157 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.158 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.211 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.653 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Successfully updated port: 1e754fc7-106a-43d2-a675-79c30089904b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.669 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.670 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:11:33 compute-0 nova_compute[349548]: 2025-12-05 02:11:33.670 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  5 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.177 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec  5 02:11:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 124 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 702 KiB/s wr, 53 op/s
Dec  5 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.442 349552 DEBUG nova.compute.manager [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.443 349552 DEBUG nova.compute.manager [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  5 02:11:34 compute-0 nova_compute[349548]: 2025-12-05 02:11:34.443 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.500 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900680.498941, 59e35a32-9023-4e49-be56-9da10df3027f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.501 349552 INFO nova.compute.manager [-] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] VM Stopped (Lifecycle Event)
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.523 349552 DEBUG nova.compute.manager [None req-79a1cbc6-9a1c-47ed-9889-85557fa4e2de - - - - - -] [instance: 59e35a32-9023-4e49-be56-9da10df3027f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.529 349552 DEBUG nova.network.neutron [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.549 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.549 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance network_info: |[{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.550 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.551 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.556 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start _get_guest_xml network_info=[{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.569 349552 WARNING nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.586 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.587 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.595 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.596 349552 DEBUG nova.virt.libvirt.host [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.597 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.598 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.599 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.600 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.600 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.601 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.602 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.602 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.603 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.604 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.605 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.605 349552 DEBUG nova.virt.hardware [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  5 02:11:35 compute-0 nova_compute[349548]: 2025-12-05 02:11:35.611 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 136 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 405 KiB/s rd, 469 KiB/s wr, 51 op/s
Dec  5 02:11:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:11:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327023170' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.417 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.806s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.458 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.481 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:11:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1047341207' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.989 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.993 349552 DEBUG nova.virt.libvirt.vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.994 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:11:36 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.996 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:36.999 349552 DEBUG nova.objects.instance [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.022 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <uuid>1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</uuid>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <name>instance-0000000c</name>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:name>tempest-TestNetworkBasicOps-server-593464214</nova:name>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:11:35</nova:creationTime>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:user uuid="2e61f46e24a240608d1523fb5265d3ac">tempest-TestNetworkBasicOps-576606253-project-member</nova:user>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:project uuid="6aaead05b2404fec8f687504ed800a2b">tempest-TestNetworkBasicOps-576606253</nova:project>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <nova:port uuid="1e754fc7-106a-43d2-a675-79c30089904b">
Dec  5 02:11:37 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <system>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="serial">1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="uuid">1fcee2c4-ccfc-4651-bc90-a606a4e46e0f</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </system>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <os>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </os>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <features>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </features>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk">
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </source>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config">
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </source>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:11:37 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:ab:49:42"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <target dev="tap1e754fc7-10"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/console.log" append="off"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <video>
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </video>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:11:37 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:11:37 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:11:37 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:11:37 compute-0 nova_compute[349548]: </domain>
Dec  5 02:11:37 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.042 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Preparing to wait for external event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.043 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.043 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.044 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.045 349552 DEBUG nova.virt.libvirt.vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:11:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.046 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.047 349552 DEBUG nova.network.os_vif_util [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.048 349552 DEBUG os_vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.049 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.050 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.051 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.056 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.057 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1e754fc7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.058 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1e754fc7-10, col_values=(('external_ids', {'iface-id': '1e754fc7-106a-43d2-a675-79c30089904b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:49:42', 'vm-uuid': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.060 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.062 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:11:37 compute-0 NetworkManager[49092]: <info>  [1764900697.0625] manager: (tap1e754fc7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.076 349552 INFO os_vif [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10')#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.164 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.166 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.166 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No VIF found with MAC fa:16:3e:ab:49:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.167 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Using config drive#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.220 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.927 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.928 349552 DEBUG nova.network.neutron [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:11:37 compute-0 nova_compute[349548]: 2025-12-05 02:11:37.960 349552 DEBUG oslo_concurrency.lockutils [req-81fbe01c-f284-4855-9d5e-9c1b0dc1b111 req-8bf0dcf1-c689-4504-975f-ebfd6584428c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.048 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Creating config drive at /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.060 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv0km2fq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.097 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.098 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:11:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.214 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcv0km2fq" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.274 349552 DEBUG nova.storage.rbd_utils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.285 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.319 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.320 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.589 349552 DEBUG oslo_concurrency.processutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.590 349552 INFO nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deleting local config drive /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.config because it was imported into RBD.#033[00m
Dec  5 02:11:38 compute-0 kernel: tap1e754fc7-10: entered promiscuous mode
Dec  5 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.6787] manager: (tap1e754fc7-10): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Dec  5 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00138|binding|INFO|Claiming lport 1e754fc7-106a-43d2-a675-79c30089904b for this chassis.
Dec  5 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00139|binding|INFO|1e754fc7-106a-43d2-a675-79c30089904b: Claiming fa:16:3e:ab:49:42 10.100.0.11
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.681 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.695 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:49:42 10.100.0.11'], port_security=['fa:16:3e:ab:49:42 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6637e5fa-33c5-4d8a-98b9-4b42baed7ff5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1e754fc7-106a-43d2-a675-79c30089904b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.698 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1e754fc7-106a-43d2-a675-79c30089904b in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f bound to our chassis#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.701 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.719 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb45931-aa84-4655-853b-649248d45649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.720 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap580f50f3-c1 in ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.726 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap580f50f3-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.727 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[da5bfd09-0799-4144-afdb-423f6dea9298]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.728 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[88d8a3a6-d87a-41b3-a0f0-585a6e3a034a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 systemd-machined[138700]: New machine qemu-13-instance-0000000c.
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.748 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[2d436138-9abc-4c83-862b-b277da3dc9dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Dec  5 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00140|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b ovn-installed in OVS
Dec  5 02:11:38 compute-0 ovn_controller[89286]: 2025-12-05T02:11:38Z|00141|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b up in Southbound
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.778 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[02fac094-c095-4bd6-ba12-18a60bc24781]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 nova_compute[349548]: 2025-12-05 02:11:38.783 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:38 compute-0 systemd-udevd[448548]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8223] device (tap1e754fc7-10): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8283] device (tap1e754fc7-10): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.828 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a812f1dd-2923-41d7-9728-6fc6258570c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.837 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1c81b45e-13cc-4a42-b4fe-58721b8ef84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 systemd-udevd[448551]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.8400] manager: (tap580f50f3-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.878 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[444708fa-ea63-4b8d-8cac-0b20b4a98253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.884 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[a73a8d47-0a0c-4ca4-8da0-759cc1acca74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 NetworkManager[49092]: <info>  [1764900698.9195] device (tap580f50f3-c0): carrier: link connected
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.931 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[72cba477-7ca0-4fe8-bc65-3d327b013f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.964 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[424da552-26ec-44f9-8981-5680b791a26f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 448577, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:38 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:38.991 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[00490c83-dcdf-490a-9d8f-7a2df154ef66]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:c292'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678558, 'tstamp': 678558}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 448578, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.028 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[75156faa-16ca-41ae-95f7-70ceffded114]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 448579, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.097 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f2cc28-aa7a-42a3-9ef9-b54d58a01223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.229 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9bfc2303-16df-4851-bc31-d0357e0fe098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.230 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.231 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.232 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:39 compute-0 NetworkManager[49092]: <info>  [1764900699.2364] manager: (tap580f50f3-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Dec  5 02:11:39 compute-0 kernel: tap580f50f3-c0: entered promiscuous mode
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.241 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.243 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:39 compute-0 ovn_controller[89286]: 2025-12-05T02:11:39Z|00142|binding|INFO|Releasing lport 29ff39a2-9491-44bb-a004-0de689e8aadc from this chassis (sb_readonly=0)
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.279 349552 DEBUG nova.compute.manager [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.279 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.281 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.281 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[ed38c737-c474-4298-991b-ba9d7277bd26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.282 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.pid.haproxy
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID 580f50f3-cfd1-4167-ba29-a8edbd53ee0f
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec  5 02:11:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:39.283 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'env', 'PROCESS_TAG=haproxy-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/580f50f3-cfd1-4167-ba29-a8edbd53ee0f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.287 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.288 349552 DEBUG oslo_concurrency.lockutils [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.288 349552 DEBUG nova.compute.manager [req-ed2c6d8f-3f36-42bf-8fe2-b59fb5443dda req-d8e6f61e-72a6-422e-ba8e-d8235e928fc6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Processing event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.289 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.646 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.645317, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.648 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Started (Lifecycle Event)
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.650 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.656 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.665 349552 INFO nova.virt.libvirt.driver [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance spawned successfully.
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.665 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.746 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.754 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.754 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.755 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.755 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.756 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.757 349552 DEBUG nova.virt.libvirt.driver [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.760 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.791 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.792 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.6455514, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.792 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Paused (Lifecycle Event)
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.818 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.828 349552 INFO nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 8.09 seconds to spawn the instance on the hypervisor.
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.828 349552 DEBUG nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.829 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900699.654527, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.829 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Resumed (Lifecycle Event)
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.864 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  5 02:11:39 compute-0 podman[448650]: 2025-12-05 02:11:39.8668692 +0000 UTC m=+0.105998398 container create df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.871 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.897 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec  5 02:11:39 compute-0 podman[448650]: 2025-12-05 02:11:39.810819776 +0000 UTC m=+0.049949054 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.907 349552 INFO nova.compute.manager [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 9.16 seconds to build instance.
Dec  5 02:11:39 compute-0 systemd[1]: Started libpod-conmon-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope.
Dec  5 02:11:39 compute-0 nova_compute[349548]: 2025-12-05 02:11:39.924 349552 DEBUG oslo_concurrency.lockutils [None req-4bbc1d26-a2e4-4865-b386-9f69f8465508 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:39 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:11:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1f4a9f0791655bc892fe852878dc488b3de35fa469b0c521274d81205f10f4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:11:40 compute-0 podman[448650]: 2025-12-05 02:11:39.999595178 +0000 UTC m=+0.238724406 container init df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  5 02:11:40 compute-0 podman[448650]: 2025-12-05 02:11:40.016138862 +0000 UTC m=+0.255268090 container start df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec  5 02:11:40 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : New worker (448670) forked
Dec  5 02:11:40 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : Loading success.
Dec  5 02:11:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 90 op/s
Dec  5 02:11:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:40.871 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.903 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.922 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.922 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.924 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.924 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.925 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.925 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.926 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.926 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.962 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.963 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  5 02:11:40 compute-0 nova_compute[349548]: 2025-12-05 02:11:40.964 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.399 349552 DEBUG nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.400 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.402 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.402 349552 DEBUG oslo_concurrency.lockutils [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.403 349552 DEBUG nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.403 349552 WARNING nova.compute.manager [req-38b20a08-3cbf-47a3-9b02-ebafc3e4a353 req-43ccb437-3fbc-461c-b0c0-ba9fab8cc71a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received unexpected event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with vm_state active and task_state None.
Dec  5 02:11:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:11:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421907135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.491 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.608 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.610 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 02:11:41 compute-0 nova_compute[349548]: 2025-12-05 02:11:41.619 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.061 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.183 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.185 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3737MB free_disk=59.94660568237305GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.187 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.188 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.297 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.298 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.299 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.299 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.356 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:11:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:11:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4030712250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.909 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.919 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.935 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.952 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:11:42 compute-0 nova_compute[349548]: 2025-12-05 02:11:42.952 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:43 compute-0 podman[448724]: 2025-12-05 02:11:43.700336795 +0000 UTC m=+0.095816872 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 02:11:43 compute-0 podman[448727]: 2025-12-05 02:11:43.702763283 +0000 UTC m=+0.108120767 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:11:43 compute-0 podman[448725]: 2025-12-05 02:11:43.718553467 +0000 UTC m=+0.123701325 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:11:43 compute-0 podman[448726]: 2025-12-05 02:11:43.737573721 +0000 UTC m=+0.129564740 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:11:43 compute-0 nova_compute[349548]: 2025-12-05 02:11:43.928 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:43 compute-0 NetworkManager[49092]: <info>  [1764900703.9381] manager: (patch-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Dec  5 02:11:43 compute-0 NetworkManager[49092]: <info>  [1764900703.9393] manager: (patch-br-int-to-provnet-f36f4e0f-0425-4742-afb6-bfffeac36335): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.124 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:44 compute-0 ovn_controller[89286]: 2025-12-05T02:11:44Z|00143|binding|INFO|Releasing lport 29ff39a2-9491-44bb-a004-0de689e8aadc from this chassis (sb_readonly=0)
Dec  5 02:11:44 compute-0 ovn_controller[89286]: 2025-12-05T02:11:44Z|00144|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.163 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 116 op/s
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.528 349552 DEBUG nova.compute.manager [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.529 349552 DEBUG nova.compute.manager [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.537 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.538 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:11:44 compute-0 nova_compute[349548]: 2025-12-05 02:11:44.538 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:11:45 compute-0 nova_compute[349548]: 2025-12-05 02:11:45.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:11:45 compute-0 nova_compute[349548]: 2025-12-05 02:11:45.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:11:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:11:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:11:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:11:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/882585099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:11:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.654 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.654 349552 DEBUG nova.network.neutron [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:11:46 compute-0 nova_compute[349548]: 2025-12-05 02:11:46.699 349552 DEBUG oslo_concurrency.lockutils [req-323cc9d4-29c3-47bb-9a87-c133d4375944 req-4a8ec755-fb6d-4073-91da-a00dcd162850 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:11:47 compute-0 nova_compute[349548]: 2025-12-05 02:11:47.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:47 compute-0 nova_compute[349548]: 2025-12-05 02:11:47.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:48 compute-0 nova_compute[349548]: 2025-12-05 02:11:48.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:11:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.5 MiB/s wr, 138 op/s
Dec  5 02:11:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  5 02:11:52 compute-0 nova_compute[349548]: 2025-12-05 02:11:52.072 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:52 compute-0 nova_compute[349548]: 2025-12-05 02:11:52.180 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec  5 02:11:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 54 op/s
Dec  5 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.207 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.208 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:11:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:11:56.209 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:11:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 43 op/s
Dec  5 02:11:56 compute-0 podman[448807]: 2025-12-05 02:11:56.734348104 +0000 UTC m=+0.122889752 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:11:56 compute-0 podman[448806]: 2025-12-05 02:11:56.756503377 +0000 UTC m=+0.153162103 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Dec  5 02:11:57 compute-0 nova_compute[349548]: 2025-12-05 02:11:57.075 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:57 compute-0 nova_compute[349548]: 2025-12-05 02:11:57.183 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:11:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:11:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 962 KiB/s rd, 30 op/s
Dec  5 02:11:59 compute-0 podman[158197]: time="2025-12-05T02:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Dec  5 02:11:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9127 "" "Go-http-client/1.1"
Dec  5 02:12:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: ERROR   02:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:12:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:12:01 compute-0 podman[448844]: 2025-12-05 02:12:01.698685759 +0000 UTC m=+0.112549042 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:12:01 compute-0 podman[448845]: 2025-12-05 02:12:01.736057158 +0000 UTC m=+0.140359713 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec  5 02:12:02 compute-0 nova_compute[349548]: 2025-12-05 02:12:02.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:02 compute-0 nova_compute[349548]: 2025-12-05 02:12:02.187 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:12:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:03 compute-0 nova_compute[349548]: 2025-12-05 02:12:03.360 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:03 compute-0 podman[448880]: 2025-12-05 02:12:03.707479267 +0000 UTC m=+0.111871793 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release-0.7.12=, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., version=9.4, container_name=kepler, managed_by=edpm_ansible, config_id=edpm)
Dec  5 02:12:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:12:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 170 MiB data, 334 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 6.0 KiB/s wr, 3 op/s
Dec  5 02:12:06 compute-0 ovn_controller[89286]: 2025-12-05T02:12:06Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cf:10:bc 10.100.0.151
Dec  5 02:12:06 compute-0 ovn_controller[89286]: 2025-12-05T02:12:06Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cf:10:bc 10.100.0.151
Dec  5 02:12:07 compute-0 nova_compute[349548]: 2025-12-05 02:12:07.083 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:07 compute-0 nova_compute[349548]: 2025-12-05 02:12:07.191 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1894: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d3beff60-2d6b-47bb-8586-5b04f26833f6 does not exist
Dec  5 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 57461ed9-d14e-47dd-837f-dd2e64aed7e8 does not exist
Dec  5 02:12:09 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cbb5181d-add0-4460-9326-806497e89c59 does not exist
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:12:09 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:12:10 compute-0 nova_compute[349548]: 2025-12-05 02:12:10.064 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.176073032 +0000 UTC m=+0.053598656 container create 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:12:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 190 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 250 KiB/s rd, 1.9 MiB/s wr, 41 op/s
Dec  5 02:12:10 compute-0 systemd[1]: Started libpod-conmon-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope.
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.151524442 +0000 UTC m=+0.029050096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.29600643 +0000 UTC m=+0.173532074 container init 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.312496024 +0000 UTC m=+0.190021638 container start 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.317168525 +0000 UTC m=+0.194694239 container attach 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:12:10 compute-0 stoic_chandrasekhar[449183]: 167 167
Dec  5 02:12:10 compute-0 systemd[1]: libpod-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope: Deactivated successfully.
Dec  5 02:12:10 compute-0 conmon[449183]: conmon 6496a8d34e1ce7241365 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope/container/memory.events
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.323801201 +0000 UTC m=+0.201326855 container died 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa7a9d975cdde2c61ef571e5e657356c134f96ef94af76f1d193eaad0abe9905-merged.mount: Deactivated successfully.
Dec  5 02:12:10 compute-0 podman[449168]: 2025-12-05 02:12:10.402707117 +0000 UTC m=+0.280232751 container remove 6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:12:10 compute-0 systemd[1]: libpod-conmon-6496a8d34e1ce72413650c09955a55f246548e4396956253d02a9df5d4fd1a15.scope: Deactivated successfully.
Dec  5 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.686139518 +0000 UTC m=+0.106552184 container create 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.634445716 +0000 UTC m=+0.054858442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:10 compute-0 systemd[1]: Started libpod-conmon-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope.
Dec  5 02:12:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.915786147 +0000 UTC m=+0.336198843 container init 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.927560968 +0000 UTC m=+0.347973614 container start 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:12:10 compute-0 podman[449206]: 2025-12-05 02:12:10.931566801 +0000 UTC m=+0.351979447 container attach 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:12:12 compute-0 sad_bhabha[449222]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:12:12 compute-0 sad_bhabha[449222]: --> relative data size: 1.0
Dec  5 02:12:12 compute-0 sad_bhabha[449222]: --> All data devices are unavailable
Dec  5 02:12:12 compute-0 systemd[1]: libpod-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Deactivated successfully.
Dec  5 02:12:12 compute-0 systemd[1]: libpod-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Consumed 1.015s CPU time.
Dec  5 02:12:12 compute-0 podman[449206]: 2025-12-05 02:12:12.069276173 +0000 UTC m=+1.489688879 container died 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:12:12 compute-0 nova_compute[349548]: 2025-12-05 02:12:12.087 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d85a71b0d2297eba99571aca7b3755c5ac3d662c110c56d401e7c8b533f124c6-merged.mount: Deactivated successfully.
Dec  5 02:12:12 compute-0 podman[449206]: 2025-12-05 02:12:12.160724842 +0000 UTC m=+1.581137498 container remove 9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:12:12 compute-0 systemd[1]: libpod-conmon-9d47f16afef6145db0d0121f5401d30bc0e12e702ae0bbac3abcffbcbbd1ba91.scope: Deactivated successfully.
Dec  5 02:12:12 compute-0 nova_compute[349548]: 2025-12-05 02:12:12.194 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.370381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732370469, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1434, "num_deletes": 251, "total_data_size": 2243493, "memory_usage": 2277712, "flush_reason": "Manual Compaction"}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732387997, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2200042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37697, "largest_seqno": 39130, "table_properties": {"data_size": 2193304, "index_size": 3873, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14178, "raw_average_key_size": 19, "raw_value_size": 2179789, "raw_average_value_size": 3074, "num_data_blocks": 173, "num_entries": 709, "num_filter_entries": 709, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900589, "oldest_key_time": 1764900589, "file_creation_time": 1764900732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 17675 microseconds, and 8319 cpu microseconds.
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.388069) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2200042 bytes OK
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.388096) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390546) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390566) EVENT_LOG_v1 {"time_micros": 1764900732390559, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.390588) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2237152, prev total WAL file size 2237152, number of live WAL files 2.
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.392019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2148KB)], [86(9393KB)]
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732392117, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11818832, "oldest_snapshot_seqno": -1}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 5850 keys, 10116314 bytes, temperature: kUnknown
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732463545, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10116314, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10075674, "index_size": 24914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148414, "raw_average_key_size": 25, "raw_value_size": 9968515, "raw_average_value_size": 1704, "num_data_blocks": 1020, "num_entries": 5850, "num_filter_entries": 5850, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900732, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.463851) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10116314 bytes
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.466282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.3 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(10.0) write-amplify(4.6) OK, records in: 6368, records dropped: 518 output_compression: NoCompression
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.466311) EVENT_LOG_v1 {"time_micros": 1764900732466297, "job": 50, "event": "compaction_finished", "compaction_time_micros": 71509, "compaction_time_cpu_micros": 40332, "output_level": 6, "num_output_files": 1, "total_output_size": 10116314, "num_input_records": 6368, "num_output_records": 5850, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732467189, "job": 50, "event": "table_file_deletion", "file_number": 88}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900732471071, "job": 50, "event": "table_file_deletion", "file_number": 86}
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.391717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:12 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:12.471290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.183497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733183559, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20208, "flush_reason": "Manual Compaction"}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733186643, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 13846, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39131, "largest_seqno": 39385, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900733, "oldest_key_time": 1764900733, "file_creation_time": 1764900733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 3178 microseconds, and 900 cpu microseconds.
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.186684) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 13846 bytes OK
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.186695) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188503) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188513) EVENT_LOG_v1 {"time_micros": 1764900733188510, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188528) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.189164) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(13KB)], [89(9879KB)]
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733189224, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10130160, "oldest_snapshot_seqno": -1}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 5601 keys, 6842209 bytes, temperature: kUnknown
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733257762, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 6842209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6808160, "index_size": 18963, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 143432, "raw_average_key_size": 25, "raw_value_size": 6710216, "raw_average_value_size": 1198, "num_data_blocks": 771, "num_entries": 5601, "num_filter_entries": 5601, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900733, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.258198) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 6842209 bytes
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.260704) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 9.6 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(1225.8) write-amplify(494.2) OK, records in: 6105, records dropped: 504 output_compression: NoCompression
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.260725) EVENT_LOG_v1 {"time_micros": 1764900733260715, "job": 52, "event": "compaction_finished", "compaction_time_micros": 68825, "compaction_time_cpu_micros": 37880, "output_level": 6, "num_output_files": 1, "total_output_size": 6842209, "num_input_records": 6105, "num_output_records": 5601, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733260853, "job": 52, "event": "table_file_deletion", "file_number": 91}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900733263248, "job": 52, "event": "table_file_deletion", "file_number": 89}
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.188938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:13.263536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.28399781 +0000 UTC m=+0.093384064 container create 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.244312845 +0000 UTC m=+0.053699179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:13 compute-0 systemd[1]: Started libpod-conmon-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope.
Dec  5 02:12:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.428511059 +0000 UTC m=+0.237897383 container init 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.444657122 +0000 UTC m=+0.254043406 container start 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 02:12:13 compute-0 affectionate_blackburn[449415]: 167 167
Dec  5 02:12:13 compute-0 systemd[1]: libpod-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope: Deactivated successfully.
Dec  5 02:12:13 compute-0 conmon[449415]: conmon 514aca16c9ef90148289 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope/container/memory.events
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.452735849 +0000 UTC m=+0.262122103 container attach 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.457309737 +0000 UTC m=+0.266695991 container died 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e47439577f7ee75a0cb74c942084d08e687a19779e999c1c8fa07facc1af10-merged.mount: Deactivated successfully.
Dec  5 02:12:13 compute-0 podman[449400]: 2025-12-05 02:12:13.511327545 +0000 UTC m=+0.320713789 container remove 514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_blackburn, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:12:13 compute-0 systemd[1]: libpod-conmon-514aca16c9ef90148289540b3ad3e14f7427e151d267acfcd8fe146147519bc7.scope: Deactivated successfully.
Dec  5 02:12:13 compute-0 ovn_controller[89286]: 2025-12-05T02:12:13Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:49:42 10.100.0.11
Dec  5 02:12:13 compute-0 ovn_controller[89286]: 2025-12-05T02:12:13Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:49:42 10.100.0.11
Dec  5 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.7589915 +0000 UTC m=+0.056633911 container create 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.738725071 +0000 UTC m=+0.036367502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:13 compute-0 systemd[1]: Started libpod-conmon-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope.
Dec  5 02:12:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.920196818 +0000 UTC m=+0.217839259 container init 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.929855949 +0000 UTC m=+0.227498350 container start 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:12:13 compute-0 podman[449438]: 2025-12-05 02:12:13.934045067 +0000 UTC m=+0.231687478 container attach 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 02:12:13 compute-0 podman[449457]: 2025-12-05 02:12:13.936491726 +0000 UTC m=+0.121701679 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=)
Dec  5 02:12:13 compute-0 podman[449455]: 2025-12-05 02:12:13.957349212 +0000 UTC m=+0.149876051 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:12:13 compute-0 podman[449456]: 2025-12-05 02:12:13.978379752 +0000 UTC m=+0.164709787 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller)
Dec  5 02:12:13 compute-0 podman[449452]: 2025-12-05 02:12:13.98896861 +0000 UTC m=+0.184293157 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  5 02:12:14 compute-0 nova_compute[349548]: 2025-12-05 02:12:14.158 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 218 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.5 MiB/s wr, 79 op/s
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]: {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    "0": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "devices": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "/dev/loop3"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            ],
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_name": "ceph_lv0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_size": "21470642176",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "name": "ceph_lv0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "tags": {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_name": "ceph",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.crush_device_class": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.encrypted": "0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_id": "0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.vdo": "0"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            },
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "vg_name": "ceph_vg0"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        }
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    ],
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    "1": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "devices": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "/dev/loop4"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            ],
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_name": "ceph_lv1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_size": "21470642176",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "name": "ceph_lv1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "tags": {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_name": "ceph",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.crush_device_class": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.encrypted": "0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_id": "1",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.vdo": "0"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            },
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "vg_name": "ceph_vg1"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        }
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    ],
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    "2": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "devices": [
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "/dev/loop5"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            ],
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_name": "ceph_lv2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_size": "21470642176",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "name": "ceph_lv2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "tags": {
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.cluster_name": "ceph",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.crush_device_class": "",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.encrypted": "0",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osd_id": "2",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:                "ceph.vdo": "0"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            },
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "type": "block",
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:            "vg_name": "ceph_vg2"
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:        }
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]:    ]
Dec  5 02:12:14 compute-0 xenodochial_goldberg[449486]: }
Dec  5 02:12:14 compute-0 systemd[1]: libpod-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope: Deactivated successfully.
Dec  5 02:12:14 compute-0 podman[449438]: 2025-12-05 02:12:14.753105681 +0000 UTC m=+1.050748102 container died 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc92e3bd7314cbe4460b06279fb992e744d1ad01aa116aa5781531e1b570c9f0-merged.mount: Deactivated successfully.
Dec  5 02:12:14 compute-0 podman[449438]: 2025-12-05 02:12:14.837177122 +0000 UTC m=+1.134819533 container remove 17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 02:12:14 compute-0 systemd[1]: libpod-conmon-17bd1bf079754874a9faf22c86ea4422d0c2a7291eee1b075247304ab4a6d863.scope: Deactivated successfully.
Dec  5 02:12:15 compute-0 podman[449697]: 2025-12-05 02:12:15.903700176 +0000 UTC m=+0.056642332 container create fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:12:15 compute-0 systemd[1]: Started libpod-conmon-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope.
Dec  5 02:12:15 compute-0 podman[449697]: 2025-12-05 02:12:15.884779024 +0000 UTC m=+0.037721200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.035824887 +0000 UTC m=+0.188767123 container init fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.054436179 +0000 UTC m=+0.207378375 container start fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:12:16 compute-0 podman[449697]: 2025-12-05 02:12:16.061762845 +0000 UTC m=+0.214705101 container attach fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Dec  5 02:12:16 compute-0 vibrant_gould[449713]: 167 167
Dec  5 02:12:16 compute-0 systemd[1]: libpod-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope: Deactivated successfully.
Dec  5 02:12:16 compute-0 podman[449718]: 2025-12-05 02:12:16.137239585 +0000 UTC m=+0.054275316 container died fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ab3e7026b96fd1b792662e358428bc82f1009973040f4813dd6f9c5693342c2-merged.mount: Deactivated successfully.
Dec  5 02:12:16 compute-0 podman[449718]: 2025-12-05 02:12:16.198271469 +0000 UTC m=+0.115307160 container remove fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:12:16 compute-0 systemd[1]: libpod-conmon-fcc7936363cb99ef392902327e89402697ab024e400786b3fc58bab40ec6ad36.scope: Deactivated successfully.
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 232 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 585 KiB/s rd, 4.2 MiB/s wr, 116 op/s
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:12:16
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.meta', 'images', '.rgw.root']
Dec  5 02:12:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.465037821 +0000 UTC m=+0.080421789 container create 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.436217502 +0000 UTC m=+0.051601560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:12:16 compute-0 systemd[1]: Started libpod-conmon-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope.
Dec  5 02:12:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.642839445 +0000 UTC m=+0.258223493 container init 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.663875986 +0000 UTC m=+0.279259964 container start 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:12:16 compute-0 podman[449739]: 2025-12-05 02:12:16.669429512 +0000 UTC m=+0.284813530 container attach 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  5 02:12:17 compute-0 nova_compute[349548]: 2025-12-05 02:12:17.096 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:17 compute-0 nova_compute[349548]: 2025-12-05 02:12:17.196 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:17 compute-0 epic_bohr[449755]: {
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_id": 0,
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "type": "bluestore"
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    },
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_id": 1,
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "type": "bluestore"
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    },
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_id": 2,
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:12:17 compute-0 epic_bohr[449755]:        "type": "bluestore"
Dec  5 02:12:17 compute-0 epic_bohr[449755]:    }
Dec  5 02:12:17 compute-0 epic_bohr[449755]: }
Dec  5 02:12:17 compute-0 systemd[1]: libpod-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Deactivated successfully.
Dec  5 02:12:17 compute-0 podman[449739]: 2025-12-05 02:12:17.87245175 +0000 UTC m=+1.487835758 container died 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:12:17 compute-0 systemd[1]: libpod-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Consumed 1.192s CPU time.
Dec  5 02:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-3827056187f30237f0321a9f26f9ab4a1f2e6ede149772f2745a847fc848e40f-merged.mount: Deactivated successfully.
Dec  5 02:12:17 compute-0 podman[449739]: 2025-12-05 02:12:17.985716021 +0000 UTC m=+1.601100009 container remove 8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:12:18 compute-0 systemd[1]: libpod-conmon-8480bc07c1f9d46f7c4f781f6190f38268a30d8c485897bb29f380ff475425f2.scope: Deactivated successfully.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:12:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:12:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ecf263fb-c773-4c85-982f-1bc8e2bba288 does not exist
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0c3db574-89d6-49ac-9e15-efeb59a73aaa does not exist
Dec  5 02:12:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.195600) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738195703, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 306, "num_deletes": 250, "total_data_size": 115444, "memory_usage": 121848, "flush_reason": "Manual Compaction"}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738201357, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 115837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39386, "largest_seqno": 39691, "table_properties": {"data_size": 113807, "index_size": 258, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4101, "raw_average_key_size": 14, "raw_value_size": 109843, "raw_average_value_size": 392, "num_data_blocks": 11, "num_entries": 280, "num_filter_entries": 280, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900733, "oldest_key_time": 1764900733, "file_creation_time": 1764900738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 5861 microseconds, and 1984 cpu microseconds.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.201469) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 115837 bytes OK
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.201492) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204519) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204551) EVENT_LOG_v1 {"time_micros": 1764900738204542, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.204574) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 113244, prev total WAL file size 113244, number of live WAL files 2.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.206681) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(113KB)], [92(6681KB)]
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738206779, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 6958046, "oldest_snapshot_seqno": -1}
Dec  5 02:12:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 4.3 MiB/s wr, 118 op/s
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 5370 keys, 6238176 bytes, temperature: kUnknown
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738257965, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 6238176, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6205798, "index_size": 17868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 140412, "raw_average_key_size": 26, "raw_value_size": 6111869, "raw_average_value_size": 1138, "num_data_blocks": 706, "num_entries": 5370, "num_filter_entries": 5370, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900738, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.258175) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 6238176 bytes
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.260126) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.8 rd, 121.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 6.5 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(113.9) write-amplify(53.9) OK, records in: 5881, records dropped: 511 output_compression: NoCompression
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.260146) EVENT_LOG_v1 {"time_micros": 1764900738260137, "job": 54, "event": "compaction_finished", "compaction_time_micros": 51242, "compaction_time_cpu_micros": 31897, "output_level": 6, "num_output_files": 1, "total_output_size": 6238176, "num_input_records": 5881, "num_output_records": 5370, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738260292, "job": 54, "event": "table_file_deletion", "file_number": 94}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900738261566, "job": 54, "event": "table_file_deletion", "file_number": 92}
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.206361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:18 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:12:18.261730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:12:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:19 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:12:19 compute-0 nova_compute[349548]: 2025-12-05 02:12:19.858 349552 INFO nova.compute.manager [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Get console output#033[00m
Dec  5 02:12:19 compute-0 nova_compute[349548]: 2025-12-05 02:12:19.879 349552 INFO oslo.privsep.daemon [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpzw0yexz7/privsep.sock']#033[00m
Dec  5 02:12:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.3 MiB/s wr, 79 op/s
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.673 349552 INFO oslo.privsep.daemon [None req-f37f4a2b-3f22-45df-82a1-0a41e59316dd 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.532 449857 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.541 449857 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.545 449857 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.546 449857 INFO oslo.privsep.daemon [-] privsep daemon running as pid 449857#033[00m
Dec  5 02:12:20 compute-0 nova_compute[349548]: 2025-12-05 02:12:20.789 449857 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  5 02:12:21 compute-0 nova_compute[349548]: 2025-12-05 02:12:21.975 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:22 compute-0 nova_compute[349548]: 2025-12-05 02:12:22.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:22 compute-0 nova_compute[349548]: 2025-12-05 02:12:22.198 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 360 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Dec  5 02:12:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.189 349552 DEBUG nova.compute.manager [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-changed-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.190 349552 DEBUG nova.compute.manager [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing instance network info cache due to event network-changed-1e754fc7-106a-43d2-a675-79c30089904b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.191 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.192 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.193 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Refreshing network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:12:23 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:23.242 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:12:23 compute-0 nova_compute[349548]: 2025-12-05 02:12:23.243 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:23 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:23.244 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:12:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 196 KiB/s rd, 814 KiB/s wr, 42 op/s
Dec  5 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.005 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updated VIF entry in instance network info cache for port 1e754fc7-106a-43d2-a675-79c30089904b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.007 349552 DEBUG nova.network.neutron [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [{"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:26 compute-0 nova_compute[349548]: 2025-12-05 02:12:26.033 349552 DEBUG oslo_concurrency.lockutils [req-0541147a-3c3e-4f4e-b763-d161ddbdab7d req-5b74aa63-8a08-4d8f-90b3-197c2caadbd6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1903: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 818 KiB/s wr, 42 op/s
Dec  5 02:12:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:26.247 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:12:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 8693 writes, 39K keys, 8693 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8693 writes, 8693 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1389 writes, 7001 keys, 1389 commit groups, 1.0 writes per commit group, ingest: 8.84 MB, 0.01 MB/s#012Interval WAL: 1389 writes, 1389 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.0      0.47              0.21        27    0.017       0      0       0.0       0.0#012  L6      1/0    5.95 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.0    119.9     98.2      1.96              0.84        26    0.075    134K    14K       0.0       0.0#012 Sum      1/0    5.95 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.0     96.7     99.2      2.43              1.05        53    0.046    134K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9    110.0    106.4      0.68              0.35        16    0.042     48K   4102       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    119.9     98.2      1.96              0.84        26    0.075    134K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    104.0      0.46              0.21        26    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.07 MB/s write, 0.23 GB read, 0.07 MB/s read, 2.4 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 27.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000186 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1763,26.45 MB,8.69984%) FilterBlock(54,381.86 KB,0.122668%) IndexBlock(54,641.39 KB,0.206039%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:12:27 compute-0 nova_compute[349548]: 2025-12-05 02:12:27.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015168927443511102 of space, bias 1.0, pg target 0.4550678233053331 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:12:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:12:27 compute-0 nova_compute[349548]: 2025-12-05 02:12:27.203 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:27 compute-0 podman[449860]: 2025-12-05 02:12:27.692315747 +0000 UTC m=+0.095988127 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:12:27 compute-0 podman[449859]: 2025-12-05 02:12:27.715628622 +0000 UTC m=+0.119122767 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:12:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 36 KiB/s wr, 5 op/s
Dec  5 02:12:29 compute-0 podman[158197]: time="2025-12-05T02:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 45046 "" "Go-http-client/1.1"
Dec  5 02:12:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9111 "" "Go-http-client/1.1"
Dec  5 02:12:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.430 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.431 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.468 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.588 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.589 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.606 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.607 349552 INFO nova.compute.claims [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:12:30 compute-0 nova_compute[349548]: 2025-12-05 02:12:30.774 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:12:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4048453107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.266 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.278 349552 DEBUG nova.compute.provider_tree [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.304 349552 DEBUG nova.scheduler.client.report [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.359 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.360 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: ERROR   02:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:12:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.426 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.427 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.453 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.479 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.570 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.572 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.573 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating image(s)#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.621 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.671 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.723 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.733 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.767 349552 DEBUG nova.policy [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6aaead05b2404fec8f687504ed800a2b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.825 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.826 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.827 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.827 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.879 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:31 compute-0 nova_compute[349548]: 2025-12-05 02:12:31.889 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 e184a71d-1d91-4999-bb53-73c2caa1110a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.112 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.207 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 273 KiB/s wr, 4 op/s
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.307 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 e184a71d-1d91-4999-bb53-73c2caa1110a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.463 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] resizing rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.537 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.538 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.552 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.597 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Successfully created port: 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.619 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.620 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.630 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.630 349552 INFO nova.compute.claims [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:12:32 compute-0 podman[450071]: 2025-12-05 02:12:32.684394443 +0000 UTC m=+0.093440815 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:12:32 compute-0 podman[450072]: 2025-12-05 02:12:32.727254117 +0000 UTC m=+0.119920239 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.748 349552 DEBUG nova.objects.instance [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'migration_context' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.767 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.767 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Ensure instance console log exists: /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.768 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.768 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.769 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:32 compute-0 nova_compute[349548]: 2025-12-05 02:12:32.837 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:12:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/486296399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.374 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.385 349552 DEBUG nova.compute.provider_tree [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.412 349552 DEBUG nova.scheduler.client.report [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.453 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.454 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.519 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.519 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.544 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.577 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.691 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.692 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.693 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating image(s)#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.746 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.822 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.880 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.890 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.979 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.980 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.981 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:33 compute-0 nova_compute[349548]: 2025-12-05 02:12:33.982 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "ffce62741223dc66a92b5b29c88e68e15f46caf3" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.041 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.053 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1907: 321 pgs: 321 active+clean; 240 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 262 KiB/s wr, 4 op/s
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.467 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.633 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] resizing rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:12:34 compute-0 podman[450270]: 2025-12-05 02:12:34.708501901 +0000 UTC m=+0.125188127 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.884 349552 DEBUG nova.objects.instance [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.910 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.910 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Ensure instance console log exists: /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.913 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.914 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.915 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:34 compute-0 nova_compute[349548]: 2025-12-05 02:12:34.983 349552 DEBUG nova.policy [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69e134c969b04dc58a1d1556d8ecf4a8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '286d2d767009421bb0c889a0ff65b2a2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:12:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 277 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 956 KiB/s wr, 9 op/s
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.763 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Successfully updated port: 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.781 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.781 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.782 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.894 349552 DEBUG nova.compute.manager [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.895 349552 DEBUG nova.compute.manager [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing instance network info cache due to event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.896 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.981 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Successfully created port: d5201944-8184-405e-ae5f-b743e1bd7399 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:12:36 compute-0 nova_compute[349548]: 2025-12-05 02:12:36.987 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:12:37 compute-0 nova_compute[349548]: 2025-12-05 02:12:37.123 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:37 compute-0 nova_compute[349548]: 2025-12-05 02:12:37.214 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:12:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.323 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.324 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.334 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.338 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.386 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.387 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.388 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.389 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.753 349552 DEBUG nova.network.neutron [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.772 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.773 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance network_info: |[{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.773 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.774 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.777 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start _get_guest_xml network_info=[{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.787 349552 WARNING nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.800 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.801 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.807 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.808 349552 DEBUG nova.virt.libvirt.host [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.809 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.809 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.810 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.811 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.812 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.813 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.813 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.814 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.814 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.815 349552 DEBUG nova.virt.hardware [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.819 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.897 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Successfully updated port: d5201944-8184-405e-ae5f-b743e1bd7399 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.912 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.913 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  5 02:12:38 compute-0 nova_compute[349548]: 2025-12-05 02:12:38.913 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.009 349552 DEBUG nova.compute.manager [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.010 349552 DEBUG nova.compute.manager [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing instance network info cache due to event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.010 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.050 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1852 Content-Type: application/json Date: Fri, 05 Dec 2025 02:12:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 x-openstack-request-id: req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.050 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f", "name": "tempest-TestNetworkBasicOps-server-593464214", "status": "ACTIVE", "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "user_id": "2e61f46e24a240608d1523fb5265d3ac", "metadata": {}, "hostId": "10fe85d51a16ea11dc2b9c4c45121e1df0a1e83cc5f4e895a8b24c00", "image": {"id": "e9091bfb-b431-47c9-a284-79372046956b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/e9091bfb-b431-47c9-a284-79372046956b"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:11:29Z", "updated": "2025-12-05T02:11:39Z", "addresses": {"tempest-network-smoke--2137061445": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:ab:49:42"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-727356260", "OS-SRV-USG:launched_at": "2025-12-05T02:11:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-843142180"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} 
_http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.051 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f used request id req-3110e6ad-c7c8-4e93-96cc-20e82be6ebd4 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.052 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'name': 'tempest-TestNetworkBasicOps-server-593464214', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'e9091bfb-b431-47c9-a284-79372046956b'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '6aaead05b2404fec8f687504ed800a2b', 'user_id': '2e61f46e24a240608d1523fb5265d3ac', 'hostId': '10fe85d51a16ea11dc2b9c4c45121e1df0a1e83cc5f4e895a8b24c00', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.055 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 292fd084-0808-4a80-adc1-6ab1f28e188a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 02:12:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:39.055 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.112 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:12:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:12:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127959064' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.368 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.436 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.446 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:12:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502564395' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.944 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.945 349552 DEBUG nova.virt.libvirt.vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.946 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.946 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.947 349552 DEBUG nova.objects.instance [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'pci_devices' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.963 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <uuid>e184a71d-1d91-4999-bb53-73c2caa1110a</uuid>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <name>instance-0000000d</name>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:name>tempest-TestNetworkBasicOps-server-246991198</nova:name>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:12:38</nova:creationTime>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:user uuid="2e61f46e24a240608d1523fb5265d3ac">tempest-TestNetworkBasicOps-576606253-project-member</nova:user>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:project uuid="6aaead05b2404fec8f687504ed800a2b">tempest-TestNetworkBasicOps-576606253</nova:project>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <nova:port uuid="94c7e2c9-6aeb-4be2-a022-8cd7ad27d978">
Dec  5 02:12:39 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <system>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="serial">e184a71d-1d91-4999-bb53-73c2caa1110a</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="uuid">e184a71d-1d91-4999-bb53-73c2caa1110a</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </system>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <os>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </os>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <features>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </features>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/e184a71d-1d91-4999-bb53-73c2caa1110a_disk">
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </source>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config">
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </source>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:12:39 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:de:22:fb"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <target dev="tap94c7e2c9-6a"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/console.log" append="off"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <video>
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </video>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:12:39 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:12:39 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:12:39 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:12:39 compute-0 nova_compute[349548]: </domain>
Dec  5 02:12:39 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.964 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Preparing to wait for external event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.964 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.965 349552 DEBUG nova.virt.libvirt.vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:31Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.966 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.966 349552 DEBUG nova.network.os_vif_util [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG os_vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.967 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.968 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94c7e2c9-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.971 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap94c7e2c9-6a, col_values=(('external_ids', {'iface-id': '94c7e2c9-6aeb-4be2-a022-8cd7ad27d978', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:22:fb', 'vm-uuid': 'e184a71d-1d91-4999-bb53-73c2caa1110a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:39 compute-0 NetworkManager[49092]: <info>  [1764900759.9750] manager: (tap94c7e2c9-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.977 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:39 compute-0 nova_compute[349548]: 2025-12-05 02:12:39.991 349552 INFO os_vif [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a')
Dec  5 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  5 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  5 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.045 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] No VIF found with MAC fa:16:3e:de:22:fb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  5 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.046 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Using config drive
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.068 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Fri, 05 Dec 2025 02:12:39 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-48c8e93c-6a12-4464-9687-988b9aab96fa x-openstack-request-id: req-48c8e93c-6a12-4464-9687-988b9aab96fa _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.069 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "292fd084-0808-4a80-adc1-6ab1f28e188a", "name": "te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa", "status": "ACTIVE", "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "user_id": "99591ed8361e41579fee1d14f16bf0f7", "metadata": {"metering.server_group": "92ca195d-98d1-443c-9947-dcb7ca7b926a"}, "hostId": "1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18", "image": {"id": "773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:11:15Z", "updated": "2025-12-05T02:11:30Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.151", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:cf:10:bc"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/292fd084-0808-4a80-adc1-6ab1f28e188a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T02:11:30.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.069 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/292fd084-0808-4a80-adc1-6ab1f28e188a used request id req-48c8e93c-6a12-4464-9687-988b9aab96fa request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.072 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.072 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.073 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.076 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:12:40.073517) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:12:40.076336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 nova_compute[349548]: 2025-12-05 02:12:40.092 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.102 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.103 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.127 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.127 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.130 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:12:40.130276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.132 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>]
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.133 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.134 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:12:40.132595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:12:40.134368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.193 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.bytes volume: 31119872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.194 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.bytes volume: 274750 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.262 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.262 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.latency volume: 3189139202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.264 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.latency volume: 134745289 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.265 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.265 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:12:40.264367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.requests volume: 1143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.267 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.read.requests volume: 108 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.268 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.268 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:12:40.267268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:12:40.270139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.270 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.271 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.271 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.bytes volume: 72970240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.273 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72802304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:12:40.273217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.275 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.277 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.278 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:12:40.277369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.300 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.320 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.latency volume: 11092676280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:12:40.321378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10839664673 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.requests volume: 289 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:12:40.323016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 284 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.323 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.324 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:12:40.324660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.327 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f / tap1e754fc7-10 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.327 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets volume: 115 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.330 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 292fd084-0808-4a80-adc1-6ab1f28e188a / tap706f9405-40 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.330 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:12:40.331535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:12:40.333008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/disk.device.allocation volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.333 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:12:40.335025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.335 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.bytes volume: 16034 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.336 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:12:40.336437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:12:40.337970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:12:40.339546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.339 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestNetworkBasicOps-server-593464214>, <NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa>]
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:12:40.340628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.340 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/memory.usage volume: 42.78125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.5078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.bytes volume: 20202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:12:40.342117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.342 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:12:40.343542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:12:40.344996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/cpu volume: 32290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:12:40.346414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.346 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 66970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:12:40.347839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.compute.pollsters [-] 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.349 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:12:40.349270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:40 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:12:40.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.238 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Creating config drive at /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.250 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mw69v60 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.330 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.353 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.354 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.355 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.356 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.356 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.357 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.357 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.358 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.387 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.388 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.388 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.389 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.389 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.424 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mw69v60" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.507 349552 DEBUG nova.storage.rbd_utils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] rbd image e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.519 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.772 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updated VIF entry in instance network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.774 349552 DEBUG nova.network.neutron [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.793 349552 DEBUG oslo_concurrency.lockutils [req-7948c396-f586-464c-91dc-5b1543e66ab0 req-602e42d3-e9f0-405b-baa4-9877d1c76a34 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.807 349552 DEBUG oslo_concurrency.processutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config e184a71d-1d91-4999-bb53-73c2caa1110a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.807 349552 INFO nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deleting local config drive /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a/disk.config because it was imported into RBD.#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.812 349552 DEBUG nova.network.neutron [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.834 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.834 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance network_info: |[{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.835 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.836 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.842 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start _get_guest_xml network_info=[{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': 'e9091bfb-b431-47c9-a284-79372046956b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.865 349552 WARNING nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.882 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.883 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.890 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.891 349552 DEBUG nova.virt.libvirt.host [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.891 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.892 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:07:35Z,direct_url=<?>,disk_format='qcow2',id=e9091bfb-b431-47c9-a284-79372046956b,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='6ad982b73954486390215862ee62239f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:07:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.893 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.894 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.894 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.895 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.896 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.896 349552 DEBUG nova.virt.hardware [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:12:41 compute-0 kernel: tap94c7e2c9-6a: entered promiscuous mode
Dec  5 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9041] manager: (tap94c7e2c9-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.906 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00145|binding|INFO|Claiming lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for this chassis.
Dec  5 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00146|binding|INFO|94c7e2c9-6aeb-4be2-a022-8cd7ad27d978: Claiming fa:16:3e:de:22:fb 10.100.0.3
Dec  5 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.920 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.921 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f bound to our chassis#033[00m
Dec  5 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.925 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f#033[00m
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.937 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00147|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 ovn-installed in OVS
Dec  5 02:12:41 compute-0 ovn_controller[89286]: 2025-12-05T02:12:41Z|00148|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 up in Southbound
Dec  5 02:12:41 compute-0 nova_compute[349548]: 2025-12-05 02:12:41.941 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:41 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:41.955 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8d65503e-b0b1-447c-aca8-0ceeba9d2f37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:41 compute-0 systemd-udevd[450488]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:12:41 compute-0 systemd-machined[138700]: New machine qemu-14-instance-0000000d.
Dec  5 02:12:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9827] device (tap94c7e2c9-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:12:41 compute-0 NetworkManager[49092]: <info>  [1764900761.9833] device (tap94c7e2c9-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:12:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536725473' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:12:41 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.006 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[d73554d7-b3df-443f-b570-d68c3ff85df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.011 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0b729650-b5e8-4df7-9cd6-1e0801eec36f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.014 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.038 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8bfd54-cf52-4024-9073-bca39f051281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.056 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[53a9fbcb-767f-4ede-8043-747c98822a89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 24166, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450509, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.072 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d31aa3-e5c8-4bc4-80ab-79d0e97db135]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450513, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450513, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.074 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.080 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.080 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.081 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.081 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:42 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:42.082 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.121 349552 DEBUG nova.compute.manager [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.121 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.122 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.123 349552 DEBUG oslo_concurrency.lockutils [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.123 349552 DEBUG nova.compute.manager [req-16f09f15-9a61-4fe3-89b0-a03dbba4330b req-35b5a165-b764-49af-bd68-90926f7d77d1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Processing event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.140 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.140 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.146 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.146 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.152 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.152 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.216 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Dec  5 02:12:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:12:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2241966120' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.396 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.427 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.435 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.715 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7150173, e184a71d-1d91-4999-bb53-73c2caa1110a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.716 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Started (Lifecycle Event)#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.718 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.722 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.728 349552 INFO nova.virt.libvirt.driver [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance spawned successfully.#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.728 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.737 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.743 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.758 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.758 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.759 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.759 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.760 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.760 349552 DEBUG nova.virt.libvirt.driver [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.764 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.764 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7151551, e184a71d-1d91-4999-bb53-73c2caa1110a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.765 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.790 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.855751037597656GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.791 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.794 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.799 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900762.7341359, e184a71d-1d91-4999-bb53-73c2caa1110a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.799 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.824 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.829 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.856 349552 INFO nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 11.29 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.857 349552 DEBUG nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.857 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:12:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:12:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214257080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.913 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.914 349552 DEBUG nova.virt.libvirt.vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.914 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.915 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.916 349552 DEBUG nova.objects.instance [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.917 349552 INFO nova.compute.manager [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 12.37 seconds to build instance.#033[00m
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <uuid>117d1772-87cc-4a3d-bf07-3f9b49ac0c63</uuid>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <name>instance-0000000e</name>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:name>tempest-TestServerBasicOps-server-1301152906</nova:name>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:12:41</nova:creationTime>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:user uuid="69e134c969b04dc58a1d1556d8ecf4a8">tempest-TestServerBasicOps-1996691968-project-member</nova:user>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:project uuid="286d2d767009421bb0c889a0ff65b2a2">tempest-TestServerBasicOps-1996691968</nova:project>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="e9091bfb-b431-47c9-a284-79372046956b"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <nova:port uuid="d5201944-8184-405e-ae5f-b743e1bd7399">
Dec  5 02:12:42 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <system>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="serial">117d1772-87cc-4a3d-bf07-3f9b49ac0c63</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="uuid">117d1772-87cc-4a3d-bf07-3f9b49ac0c63</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </system>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <os>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </os>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <features>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </features>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk">
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </source>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config">
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </source>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:12:42 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:8f:b5:d5"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <target dev="tapd5201944-81"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/console.log" append="off"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <video>
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </video>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:12:42 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:12:42 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:12:42 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:12:42 compute-0 nova_compute[349548]: </domain>
Dec  5 02:12:42 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Preparing to wait for external event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.932 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG nova.virt.libvirt.vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:12:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.933 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.934 349552 DEBUG nova.network.os_vif_util [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.934 349552 DEBUG os_vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.935 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.936 349552 DEBUG oslo_concurrency.lockutils [None req-310e69a0-94b9-4e7c-81e8-4ee446040631 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5201944-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.938 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5201944-81, col_values=(('external_ids', {'iface-id': 'd5201944-8184-405e-ae5f-b743e1bd7399', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:b5:d5', 'vm-uuid': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:42 compute-0 NetworkManager[49092]: <info>  [1764900762.9413] manager: (tapd5201944-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.941 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e184a71d-1d91-4999-bb53-73c2caa1110a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.943 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.944 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:12:42 compute-0 nova_compute[349548]: 2025-12-05 02:12:42.948 349552 INFO os_vif [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81')
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] No VIF found with MAC fa:16:3e:8f:b5:d5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.004 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Using config drive
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.063 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.133 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.348 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updated VIF entry in instance network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.349 349552 DEBUG nova.network.neutron [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.370 349552 DEBUG oslo_concurrency.lockutils [req-8b091dad-9831-42f8-90a2-9762ff3e7737 req-e199f0b1-85d9-4e4a-86ba-7282587ea851 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  5 02:12:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:12:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1808277890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.668 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.693 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.721 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.744 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.745 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.942 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Creating config drive at /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config
Dec  5 02:12:43 compute-0 nova_compute[349548]: 2025-12-05 02:12:43.954 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7f84plt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.088 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk7f84plt" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.139 349552 DEBUG nova.storage.rbd_utils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] rbd image 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.149 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.196 349552 DEBUG nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.199 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.199 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.200 349552 DEBUG oslo_concurrency.lockutils [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.200 349552 DEBUG nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.201 349552 WARNING nova.compute.manager [req-f3bb9c46-60ca-47a7-9936-7e455aace84b req-7625ed6e-0d72-4a73-bda6-8ca82faefb3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:12:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 MiB/s wr, 55 op/s
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.415 349552 DEBUG oslo_concurrency.processutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config 117d1772-87cc-4a3d-bf07-3f9b49ac0c63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.416 349552 INFO nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deleting local config drive /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63/disk.config because it was imported into RBD.#033[00m
Dec  5 02:12:44 compute-0 kernel: tapd5201944-81: entered promiscuous mode
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.4885] manager: (tapd5201944-81): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec  5 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00149|binding|INFO|Claiming lport d5201944-8184-405e-ae5f-b743e1bd7399 for this chassis.
Dec  5 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00150|binding|INFO|d5201944-8184-405e-ae5f-b743e1bd7399: Claiming fa:16:3e:8f:b5:d5 10.100.0.12
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.489 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.496 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:b5:d5 10.100.0.12'], port_security=['fa:16:3e:8f:b5:d5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297ab129-d19a-4a0e-893c-731678c3b7a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '286d2d767009421bb0c889a0ff65b2a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cea42b97-22e3-42f2-b4a9-e60ab6e5a3f6 f4a2d83a-c7b3-4fde-b9ec-59d46e5208fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff1f531e-a659-4463-9351-3086ed6c2f8e, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=d5201944-8184-405e-ae5f-b743e1bd7399) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.497 287122 INFO neutron.agent.ovn.metadata.agent [-] Port d5201944-8184-405e-ae5f-b743e1bd7399 in datapath 297ab129-d19a-4a0e-893c-731678c3b7a7 bound to our chassis#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.499 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 297ab129-d19a-4a0e-893c-731678c3b7a7#033[00m
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.5017] device (tapd5201944-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.5022] device (tapd5201944-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.511 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20cab368-add5-41ab-9659-f98d638ae7fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.512 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap297ab129-d1 in ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  5 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00151|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 ovn-installed in OVS
Dec  5 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00152|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 up in Southbound
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.513 412744 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap297ab129-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.515 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.513 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6e251ffd-8149-4c27-8bbb-363a24af2615]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.517 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0a33e432-ecdd-43ef-9242-4ba5ccb80cc3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.537 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[88ee01d2-a135-405b-a3de-9ae31794ba2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 systemd-machined[138700]: New machine qemu-15-instance-0000000e.
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.562 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2f82d351-2bc6-4d87-a6c4-3ae69b05f74a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.615 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[858c5f2e-2866-480f-89df-4795b0e25adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 podman[450703]: 2025-12-05 02:12:44.620635239 +0000 UTC m=+0.089304409 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.6311] manager: (tap297ab129-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.633 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0439ab32-36bd-4fc7-b4ac-1253a9f17391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 podman[450704]: 2025-12-05 02:12:44.662740842 +0000 UTC m=+0.130081005 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.683 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[aca4361f-f8ed-42fd-a731-715d8294dfde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 podman[450707]: 2025-12-05 02:12:44.689534504 +0000 UTC m=+0.136917786 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.688 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[6fea007a-aae2-4be7-b0a7-cb35a14a27e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.7168] device (tap297ab129-d0): carrier: link connected
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.727 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3091511f-47d6-4cb9-a3b1-548fb2131ff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.755 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2ff7664e-85a6-4654-905e-18e7b4842c1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297ab129-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685138, 'reachable_time': 17985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 450816, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 podman[450706]: 2025-12-05 02:12:44.761610009 +0000 UTC m=+0.212774937 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.772 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b50c7bc4-7688-4702-977e-06aff0812db6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:dbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 685138, 'tstamp': 685138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 450817, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.796 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[6aca4535-d07a-4444-9e9f-db6eb150bfcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap297ab129-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:0d:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685138, 'reachable_time': 17985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 450821, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.838 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[97fbf751-559e-4ebb-b373-ebac5e83d716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.905 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[1eed7de1-fc18-4e13-bc70-2555fcbfa7ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.906 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297ab129-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.907 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.907 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap297ab129-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:44 compute-0 NetworkManager[49092]: <info>  [1764900764.9101] manager: (tap297ab129-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Dec  5 02:12:44 compute-0 kernel: tap297ab129-d0: entered promiscuous mode
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.915 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.916 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap297ab129-d0, col_values=(('external_ids', {'iface-id': '9db11503-fcc0-46ec-ad9b-de48fe796de4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:12:44 compute-0 ovn_controller[89286]: 2025-12-05T02:12:44Z|00153|binding|INFO|Releasing lport 9db11503-fcc0-46ec-ad9b-de48fe796de4 from this chassis (sb_readonly=0)
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.919 287122 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b00fe178-03de-4515-86ea-9a35669e0186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.921 287122 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: global
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    log         /dev/log local0 debug
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    log-tag     haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    user        root
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    group       root
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    maxconn     1024
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    pidfile     /var/lib/neutron/external/pids/297ab129-d19a-4a0e-893c-731678c3b7a7.pid.haproxy
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    daemon
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: defaults
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    log global
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    mode http
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    option httplog
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    option dontlognull
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    option http-server-close
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    option forwardfor
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    retries                 3
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    timeout http-request    30s
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    timeout connect         30s
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    timeout client          32s
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    timeout server          32s
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    timeout http-keep-alive 30s
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: listen listener
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    bind 169.254.169.254:80
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    server metadata /var/lib/neutron/metadata_proxy
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]:    http-request add-header X-OVN-Network-ID 297ab129-d19a-4a0e-893c-731678c3b7a7
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Dec  5 02:12:44 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:44.922 287122 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'env', 'PROCESS_TAG=haproxy-297ab129-d19a-4a0e-893c-731678c3b7a7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/297ab129-d19a-4a0e-893c-731678c3b7a7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  5 02:12:44 compute-0 nova_compute[349548]: 2025-12-05 02:12:44.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.025 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.024596, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.026 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Started (Lifecycle Event)#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.047 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.054 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.024677, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.054 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.070 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.076 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.093 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.234 349552 DEBUG nova.compute.manager [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.235 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.235 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.236 349552 DEBUG oslo_concurrency.lockutils [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.236 349552 DEBUG nova.compute.manager [req-3c80e0b9-c781-4f5a-8873-354a672ae1a7 req-3f1408e7-7f0c-4342-a559-de332c24eda7 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Processing event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.239 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.244 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900765.2433593, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.244 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.246 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.260 349552 INFO nova.virt.libvirt.driver [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance spawned successfully.#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.261 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.264 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.271 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.285 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.286 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.287 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.287 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.288 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.289 349552 DEBUG nova.virt.libvirt.driver [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.294 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:12:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:12:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:12:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:12:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2380504868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.347 349552 INFO nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 11.66 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.347 349552 DEBUG nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.406 349552 INFO nova.compute.manager [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 12.80 seconds to build instance.#033[00m
Dec  5 02:12:45 compute-0 nova_compute[349548]: 2025-12-05 02:12:45.422 349552 DEBUG oslo_concurrency.lockutils [None req-7a488bf3-6b8a-4d4a-bcd2-95663abda0aa 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.433556401 +0000 UTC m=+0.071016436 container create 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:12:45 compute-0 systemd[1]: Started libpod-conmon-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope.
Dec  5 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.398151316 +0000 UTC m=+0.035611371 image pull 014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  5 02:12:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:12:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/596f961f4772e7095f34b4530a56ee23aa3e77a9b26fe356092ec6991cf0ede7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  5 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.536168223 +0000 UTC m=+0.173628348 container init 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:12:45 compute-0 podman[450889]: 2025-12-05 02:12:45.545451964 +0000 UTC m=+0.182912029 container start 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:12:45 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : New worker (450910) forked
Dec  5 02:12:45 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : Loading success.
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 3.3 MiB/s wr, 72 op/s
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:12:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.457 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.458 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:46 compute-0 nova_compute[349548]: 2025-12-05 02:12:46.460 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.220 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.326 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.327 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.329 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.330 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.331 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.331 349552 WARNING nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received unexpected event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.332 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.333 349552 DEBUG nova.compute.manager [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing instance network info cache due to event network-changed-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.333 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.334 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.335 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Refreshing network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:12:47 compute-0 nova_compute[349548]: 2025-12-05 02:12:47.940 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 133 op/s
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.258 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updated VIF entry in instance network info cache for port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.260 349552 DEBUG nova.network.neutron [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [{"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.285 349552 DEBUG oslo_concurrency.lockutils [req-0772adba-a63f-4f7e-9b61-ae81918f8b81 req-779863eb-71ae-4614-95a7-87b9e8373a81 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e184a71d-1d91-4999-bb53-73c2caa1110a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.466 349552 DEBUG nova.compute.manager [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.467 349552 DEBUG nova.compute.manager [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing instance network info cache due to event network-changed-d5201944-8184-405e-ae5f-b743e1bd7399. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.468 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.469 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:12:49 compute-0 nova_compute[349548]: 2025-12-05 02:12:49.470 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Refreshing network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:12:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 30 KiB/s wr, 87 op/s
Dec  5 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.346 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updated VIF entry in instance network info cache for port d5201944-8184-405e-ae5f-b743e1bd7399. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.349 349552 DEBUG nova.network.neutron [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [{"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:12:51 compute-0 nova_compute[349548]: 2025-12-05 02:12:51.370 349552 DEBUG oslo_concurrency.lockutils [req-3db08275-cdc0-412b-9c00-9fa986f5c3d5 req-2adf48b4-dd08-4125-a4c7-9a4b23cf7831 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-117d1772-87cc-4a3d-bf07-3f9b49ac0c63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:12:52 compute-0 nova_compute[349548]: 2025-12-05 02:12:52.223 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Dec  5 02:12:52 compute-0 nova_compute[349548]: 2025-12-05 02:12:52.945 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 17 KiB/s wr, 143 op/s
Dec  5 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.209 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.210 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:12:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:12:56.211 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:12:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Dec  5 02:12:57 compute-0 nova_compute[349548]: 2025-12-05 02:12:57.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:57 compute-0 nova_compute[349548]: 2025-12-05 02:12:57.950 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:12:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:12:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 9.4 KiB/s wr, 127 op/s
Dec  5 02:12:58 compute-0 podman[450920]: 2025-12-05 02:12:58.672556787 +0000 UTC m=+0.083920578 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 02:12:58 compute-0 podman[450921]: 2025-12-05 02:12:58.683492424 +0000 UTC m=+0.092407457 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:12:59 compute-0 podman[158197]: time="2025-12-05T02:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46279 "" "Go-http-client/1.1"
Dec  5 02:12:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9588 "" "Go-http-client/1.1"
Dec  5 02:13:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1920: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: ERROR   02:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:13:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:13:02 compute-0 nova_compute[349548]: 2025-12-05 02:13:02.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 KiB/s wr, 62 op/s
Dec  5 02:13:02 compute-0 nova_compute[349548]: 2025-12-05 02:13:02.954 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:03 compute-0 podman[450961]: 2025-12-05 02:13:03.739013422 +0000 UTC m=+0.128972273 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  5 02:13:03 compute-0 podman[450960]: 2025-12-05 02:13:03.741861452 +0000 UTC m=+0.126469913 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  5 02:13:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec  5 02:13:05 compute-0 podman[450997]: 2025-12-05 02:13:05.731593924 +0000 UTC m=+0.138530921 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container)
Dec  5 02:13:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s wr, 1 op/s
Dec  5 02:13:07 compute-0 nova_compute[349548]: 2025-12-05 02:13:07.233 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:07 compute-0 nova_compute[349548]: 2025-12-05 02:13:07.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:13:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:13:12 compute-0 nova_compute[349548]: 2025-12-05 02:13:12.234 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  5 02:13:12 compute-0 nova_compute[349548]: 2025-12-05 02:13:12.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1927: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  5 02:13:14 compute-0 podman[451022]: 2025-12-05 02:13:14.824581666 +0000 UTC m=+0.093265400 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9)
Dec  5 02:13:14 compute-0 podman[451015]: 2025-12-05 02:13:14.833538348 +0000 UTC m=+0.139101388 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 02:13:14 compute-0 podman[451016]: 2025-12-05 02:13:14.860754172 +0000 UTC m=+0.137067600 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:13:14 compute-0 podman[451064]: 2025-12-05 02:13:14.935533602 +0000 UTC m=+0.105559375 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:13:16
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'default.rgw.meta', 'images', '.mgr', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log']
Dec  5 02:13:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:13:17 compute-0 ceph-mgr[193209]: client.0 ms_handle_reset on v2:192.168.122.100:6800/858078637
Dec  5 02:13:17 compute-0 nova_compute[349548]: 2025-12-05 02:13:17.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:17 compute-0 nova_compute[349548]: 2025-12-05 02:13:17.963 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:13:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec  5 02:13:18 compute-0 ovn_controller[89286]: 2025-12-05T02:13:18Z|00154|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 03925c2f-fae8-49d3-bd1b-6fc7da0f616e does not exist
Dec  5 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 44f7f950-5b84-4446-8da9-7cb635512fb3 does not exist
Dec  5 02:13:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 78c41e78-d5a6-4b8b-972d-f029e888572d does not exist
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:13:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:13:19 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:13:20 compute-0 ovn_controller[89286]: 2025-12-05T02:13:20Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:22:fb 10.100.0.3
Dec  5 02:13:20 compute-0 ovn_controller[89286]: 2025-12-05T02:13:20Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:22:fb 10.100.0.3
Dec  5 02:13:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 329 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 2.7 KiB/s wr, 0 op/s
Dec  5 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.405778357 +0000 UTC m=+0.058434592 container create 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.382136913 +0000 UTC m=+0.034793168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:20 compute-0 systemd[1]: Started libpod-conmon-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope.
Dec  5 02:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.561673985 +0000 UTC m=+0.214330240 container init 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.569749942 +0000 UTC m=+0.222406177 container start 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.573739064 +0000 UTC m=+0.226395299 container attach 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:13:20 compute-0 unruffled_montalcini[451380]: 167 167
Dec  5 02:13:20 compute-0 systemd[1]: libpod-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope: Deactivated successfully.
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.58176326 +0000 UTC m=+0.234419495 container died 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a6732ff0b3c5c103ec064e5762d888a35adfd2dabe45e0a0f58cc3568c4726-merged.mount: Deactivated successfully.
Dec  5 02:13:20 compute-0 podman[451362]: 2025-12-05 02:13:20.627658029 +0000 UTC m=+0.280314264 container remove 689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_montalcini, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:13:20 compute-0 systemd[1]: libpod-conmon-689457d98f5acb7f893b470d9a24dcaf62aa071804f1b03f3f0de678b39dcf81.scope: Deactivated successfully.
Dec  5 02:13:20 compute-0 podman[451403]: 2025-12-05 02:13:20.877189377 +0000 UTC m=+0.071664854 container create 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:13:20 compute-0 systemd[1]: Started libpod-conmon-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope.
Dec  5 02:13:20 compute-0 podman[451403]: 2025-12-05 02:13:20.852545625 +0000 UTC m=+0.047021182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.020037319 +0000 UTC m=+0.214512816 container init 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.033443356 +0000 UTC m=+0.227918843 container start 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:13:21 compute-0 podman[451403]: 2025-12-05 02:13:21.038472377 +0000 UTC m=+0.232947854 container attach 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:13:21 compute-0 ovn_controller[89286]: 2025-12-05T02:13:21Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8f:b5:d5 10.100.0.12
Dec  5 02:13:21 compute-0 ovn_controller[89286]: 2025-12-05T02:13:21Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8f:b5:d5 10.100.0.12
Dec  5 02:13:22 compute-0 affectionate_driscoll[451418]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:13:22 compute-0 affectionate_driscoll[451418]: --> relative data size: 1.0
Dec  5 02:13:22 compute-0 affectionate_driscoll[451418]: --> All data devices are unavailable
Dec  5 02:13:22 compute-0 nova_compute[349548]: 2025-12-05 02:13:22.240 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
Dec  5 02:13:22 compute-0 systemd[1]: libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Deactivated successfully.
Dec  5 02:13:22 compute-0 systemd[1]: libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Consumed 1.083s CPU time.
Dec  5 02:13:22 compute-0 conmon[451418]: conmon 0f8100fc86059c6a944e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope/container/memory.events
Dec  5 02:13:22 compute-0 podman[451403]: 2025-12-05 02:13:22.271676672 +0000 UTC m=+1.466152189 container died 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-59ab464c7a49eaa65b0c16dec954b33190a812f2ecd0579b72889db2e378bbc8-merged.mount: Deactivated successfully.
Dec  5 02:13:22 compute-0 podman[451403]: 2025-12-05 02:13:22.369408287 +0000 UTC m=+1.563883774 container remove 0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:13:22 compute-0 systemd[1]: libpod-conmon-0f8100fc86059c6a944e35fe127ae53e8df6388fa3cefa534d2d4f842d70dfa7.scope: Deactivated successfully.
Dec  5 02:13:22 compute-0 nova_compute[349548]: 2025-12-05 02:13:22.965 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.275806593 +0000 UTC m=+0.095361129 container create 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.242355434 +0000 UTC m=+0.061909950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:23 compute-0 systemd[1]: Started libpod-conmon-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope.
Dec  5 02:13:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.419229882 +0000 UTC m=+0.238784428 container init 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.431083794 +0000 UTC m=+0.250638320 container start 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:13:23 compute-0 recursing_dhawan[451610]: 167 167
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.437637199 +0000 UTC m=+0.257191745 container attach 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:23 compute-0 systemd[1]: libpod-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope: Deactivated successfully.
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.439653105 +0000 UTC m=+0.259207631 container died 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8695b3713b1a869fb291d2ee15900fcc5615ab58b3bdfa12d7da227712e3b058-merged.mount: Deactivated successfully.
Dec  5 02:13:23 compute-0 podman[451596]: 2025-12-05 02:13:23.499449625 +0000 UTC m=+0.319004131 container remove 58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 02:13:23 compute-0 systemd[1]: libpod-conmon-58ad5985c959447f5b7c4efe58812f3b3f671abb2edc8bd45101018010d4a398.scope: Deactivated successfully.
Dec  5 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.79624013 +0000 UTC m=+0.094656279 container create 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.750115415 +0000 UTC m=+0.048531604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:23 compute-0 systemd[1]: Started libpod-conmon-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope.
Dec  5 02:13:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.956306506 +0000 UTC m=+0.254722655 container init 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.989763706 +0000 UTC m=+0.288179845 container start 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:13:23 compute-0 podman[451634]: 2025-12-05 02:13:23.995763014 +0000 UTC m=+0.294179163 container attach 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:13:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1932: 321 pgs: 321 active+clean; 384 MiB data, 471 MiB used, 60 GiB / 60 GiB avail; 388 KiB/s rd, 4.1 MiB/s wr, 87 op/s
Dec  5 02:13:24 compute-0 busy_lichterman[451651]: {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    "0": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "devices": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "/dev/loop3"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            ],
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_name": "ceph_lv0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_size": "21470642176",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "name": "ceph_lv0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "tags": {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_name": "ceph",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.crush_device_class": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.encrypted": "0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_id": "0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.vdo": "0"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            },
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "vg_name": "ceph_vg0"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        }
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    ],
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    "1": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "devices": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "/dev/loop4"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            ],
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_name": "ceph_lv1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_size": "21470642176",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "name": "ceph_lv1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "tags": {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_name": "ceph",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.crush_device_class": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.encrypted": "0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_id": "1",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.vdo": "0"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            },
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "vg_name": "ceph_vg1"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        }
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    ],
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    "2": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "devices": [
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "/dev/loop5"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            ],
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_name": "ceph_lv2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_size": "21470642176",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "name": "ceph_lv2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "tags": {
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.cluster_name": "ceph",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.crush_device_class": "",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.encrypted": "0",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osd_id": "2",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:                "ceph.vdo": "0"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            },
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "type": "block",
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:            "vg_name": "ceph_vg2"
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:        }
Dec  5 02:13:24 compute-0 busy_lichterman[451651]:    ]
Dec  5 02:13:24 compute-0 busy_lichterman[451651]: }
Dec  5 02:13:24 compute-0 systemd[1]: libpod-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope: Deactivated successfully.
Dec  5 02:13:24 compute-0 podman[451634]: 2025-12-05 02:13:24.78820137 +0000 UTC m=+1.086617519 container died 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:13:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2544ec5cc92c3399577fd39481fa16e3be616d980e5ff68bf91af3eba5bf9723-merged.mount: Deactivated successfully.
Dec  5 02:13:24 compute-0 podman[451634]: 2025-12-05 02:13:24.891135701 +0000 UTC m=+1.189551820 container remove 88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:24 compute-0 systemd[1]: libpod-conmon-88b81d6bb413f9369be44e0d5e202aa8136e5640b88648ec5d875eaa8380ad70.scope: Deactivated successfully.
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.098661366 +0000 UTC m=+0.090851433 container create a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 02:13:26 compute-0 systemd[1]: Started libpod-conmon-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope.
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.070782663 +0000 UTC m=+0.062972730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.243334929 +0000 UTC m=+0.235525026 container init a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 394 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 4.3 MiB/s wr, 119 op/s
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.26047109 +0000 UTC m=+0.252661157 container start a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.267761465 +0000 UTC m=+0.259951532 container attach a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:26 compute-0 naughty_jackson[451827]: 167 167
Dec  5 02:13:26 compute-0 systemd[1]: libpod-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope: Deactivated successfully.
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.280 349552 INFO nova.compute.manager [None req-5382d356-6ec0-439c-8c04-ddc894f8c060 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Get console output#033[00m
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.283997761 +0000 UTC m=+0.276187828 container died a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.305 449857 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Dec  5 02:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a427bb122b6ec0b466945da2bbdcc375f0f7688f8eaa5938775ae7f60a940c-merged.mount: Deactivated successfully.
Dec  5 02:13:26 compute-0 podman[451811]: 2025-12-05 02:13:26.360352756 +0000 UTC m=+0.352542823 container remove a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jackson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:13:26 compute-0 systemd[1]: libpod-conmon-a925d7c9b2fc07d7efa71c55751a3e34a6756cdcb9bb063a1369e166fef8287e.scope: Deactivated successfully.
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.603 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.605 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.606 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.607 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.607 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.611 349552 INFO nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Terminating instance#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.614 349552 DEBUG nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.643115696 +0000 UTC m=+0.083701341 container create 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.618432193 +0000 UTC m=+0.059017848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:13:26 compute-0 systemd[1]: Started libpod-conmon-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope.
Dec  5 02:13:26 compute-0 kernel: tap94c7e2c9-6a (unregistering): left promiscuous mode
Dec  5 02:13:26 compute-0 NetworkManager[49092]: <info>  [1764900806.7200] device (tap94c7e2c9-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00155|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=0)
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00156|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down in Southbound
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00157|binding|INFO|Removing iface tap94c7e2c9-6a ovn-installed in OVS
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.739 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.748 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.750 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.752 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f#033[00m
Dec  5 02:13:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.766 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.786 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[bca9f9f3-fbba-4302-bdfb-efdea66df223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.794293802 +0000 UTC m=+0.234879457 container init 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.809200391 +0000 UTC m=+0.249786036 container start 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:13:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec  5 02:13:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 39.976s CPU time.
Dec  5 02:13:26 compute-0 podman[451849]: 2025-12-05 02:13:26.821442325 +0000 UTC m=+0.262027960 container attach 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:13:26 compute-0 systemd-machined[138700]: Machine qemu-14-instance-0000000d terminated.
Dec  5 02:13:26 compute-0 kernel: tap94c7e2c9-6a: entered promiscuous mode
Dec  5 02:13:26 compute-0 systemd-udevd[451875]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.842 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 kernel: tap94c7e2c9-6a (unregistering): left promiscuous mode
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00158|binding|INFO|Claiming lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for this chassis.
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00159|binding|INFO|94c7e2c9-6aeb-4be2-a022-8cd7ad27d978: Claiming fa:16:3e:de:22:fb 10.100.0.3
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.838 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9e966a-47f5-499b-8967-3ecd0cb0ae8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 NetworkManager[49092]: <info>  [1764900806.8495] manager: (tap94c7e2c9-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.851 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fc451d94-4cc3-4c85-8c4a-1a431702e659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.854 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00160|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 ovn-installed in OVS
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00161|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 up in Southbound
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00162|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=1)
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.881 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00163|if_status|INFO|Not setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down as sb is readonly
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.885 349552 INFO nova.virt.libvirt.driver [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Instance destroyed successfully.#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.885 349552 DEBUG nova.objects.instance [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'resources' on Instance uuid e184a71d-1d91-4999-bb53-73c2caa1110a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00164|binding|INFO|Releasing lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 from this chassis (sb_readonly=0)
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00165|binding|INFO|Removing iface tap94c7e2c9-6a ovn-installed in OVS
Dec  5 02:13:26 compute-0 ovn_controller[89286]: 2025-12-05T02:13:26Z|00166|binding|INFO|Setting lport 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 down in Southbound
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.893 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.901 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.901 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd898dc-70ab-442f-8923-86bd99e1b835]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.920 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[10a07bfd-865f-4560-98dd-f197854f8f1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451886, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.937 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccfa7c6-34a9-45e5-9187-c1a7f78e957c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451887, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451887, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.940 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.942 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.943 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:22:fb 10.100.0.3'], port_security=['fa:16:3e:de:22:fb 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'e184a71d-1d91-4999-bb53-73c2caa1110a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5dbf4e63-8bae-4a45-8f77-a68eb174185f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.946 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.947 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.948 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.949 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.950 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.952 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.954 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.956 349552 DEBUG nova.virt.libvirt.vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-246991198',display_name='tempest-TestNetworkBasicOps-server-246991198',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-246991198',id=13,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPiJuAHnAJu46IBGrW2KCWpzoZreiuIkGq3//er4nG+5eIgXpgWi9tSl+igSSp8Nl6if+KEJaz1jLll0XICHyeubF/iswJE5bpcW/PYkhqz7B8mkIP3gi3Vhw5yfXTbIg==',key_name='tempest-TestNetworkBasicOps-994593786',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:12:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-9tqk8ujr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:12:42Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=e184a71d-1d91-4999-bb53-73c2caa1110a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.957 349552 DEBUG nova.network.os_vif_util [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "address": "fa:16:3e:de:22:fb", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap94c7e2c9-6a", "ovs_interfaceid": "94c7e2c9-6aeb-4be2-a022-8cd7ad27d978", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.957 349552 DEBUG nova.network.os_vif_util [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.958 349552 DEBUG os_vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.959 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.960 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94c7e2c9-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.961 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.963 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:13:26 compute-0 nova_compute[349548]: 2025-12-05 02:13:26.966 349552 INFO os_vif [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:22:fb,bridge_name='br-int',has_traffic_filtering=True,id=94c7e2c9-6aeb-4be2-a022-8cd7ad27d978,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap94c7e2c9-6a')#033[00m
Dec  5 02:13:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:26.971 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[cda3db4f-e978-418d-aea7-e51b8a114569]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.008 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba0d404-73b6-4c35-8325-1bf79c35a165]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.012 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[85e78643-05ec-47a6-90be-99222c696370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.045 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[0c797541-791c-4d83-b4db-1e896d283940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.088 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[0e81ae91-9e13-4ae7-befc-c1c0b01c3610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 9, 'rx_bytes': 658, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451912, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0030310510882659374 of space, bias 1.0, pg target 0.9093153264797812 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:13:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.120 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[484a84e5-6a07-4caf-b8bd-a3ba2c4d556e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451913, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451913, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.138 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.143 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.143 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.144 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.145 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.146 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.148 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.150 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.178 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9c6d48f5-4265-4453-80f1-e4e03fd127dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.219 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fe4a2b53-3d7c-4b7f-9184-df944d8d8676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.222 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[1c21ef35-6f32-462a-8898-6b5b39ba4991]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.244 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.282 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[244d7f1f-efcc-475c-85d5-fcbd34094829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.304 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[422a8e4d-741e-4c69-b9c1-47bd8213b9b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580f50f3-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:c2:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 11, 'rx_bytes': 658, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678558, 'reachable_time': 20224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 451919, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.324 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f17566e9-c61c-45f6-a858-48b9f180cac3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678580, 'tstamp': 678580}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451920, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap580f50f3-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678588, 'tstamp': 678588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 451920, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.327 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.329 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.331 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.332 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580f50f3-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.333 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.334 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580f50f3-c0, col_values=(('external_ids', {'iface-id': '29ff39a2-9491-44bb-a004-0de689e8aadc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:27.335 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.670 349552 INFO nova.virt.libvirt.driver [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deleting instance files /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a_del#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.671 349552 INFO nova.virt.libvirt.driver [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deletion of /var/lib/nova/instances/e184a71d-1d91-4999-bb53-73c2caa1110a_del complete#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.762 349552 INFO nova.compute.manager [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 1.15 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.762 349552 DEBUG oslo.service.loopingcall [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.763 349552 DEBUG nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:13:27 compute-0 nova_compute[349548]: 2025-12-05 02:13:27.763 349552 DEBUG nova.network.neutron [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:13:28 compute-0 kind_mclean[451865]: {
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_id": 0,
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "type": "bluestore"
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    },
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_id": 1,
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "type": "bluestore"
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    },
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_id": 2,
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:13:28 compute-0 kind_mclean[451865]:        "type": "bluestore"
Dec  5 02:13:28 compute-0 kind_mclean[451865]:    }
Dec  5 02:13:28 compute-0 kind_mclean[451865]: }
Dec  5 02:13:28 compute-0 systemd[1]: libpod-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Deactivated successfully.
Dec  5 02:13:28 compute-0 systemd[1]: libpod-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Consumed 1.182s CPU time.
Dec  5 02:13:28 compute-0 podman[451849]: 2025-12-05 02:13:28.063609292 +0000 UTC m=+1.504194957 container died 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a60a2c62c2897285e4474980e4b871a955d8c03aa3a1baf8fb2780fa4c80d6-merged.mount: Deactivated successfully.
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.122 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.122 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG oslo_concurrency.lockutils [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.123 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:28 compute-0 nova_compute[349548]: 2025-12-05 02:13:28.124 349552 DEBUG nova.compute.manager [req-2bcdb3fa-6380-4304-b659-5e0c7fb9a343 req-c019cc23-1d65-41e6-8895-f905449db1ec a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-unplugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:13:28 compute-0 podman[451849]: 2025-12-05 02:13:28.153146477 +0000 UTC m=+1.593732122 container remove 5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:13:28 compute-0 systemd[1]: libpod-conmon-5c64f8eab24d474c6b12deba9299fb74a8905657721b4fb1e0da0676ebbfe017.scope: Deactivated successfully.
Dec  5 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:13:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:13:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3ef501c5-e39a-4098-b58c-687e4ec6ba54 does not exist
Dec  5 02:13:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8021e548-aa02-46ee-9e1f-0cc917e97c95 does not exist
Dec  5 02:13:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 604 KiB/s rd, 4.3 MiB/s wr, 133 op/s
Dec  5 02:13:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:29.041 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.042 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:29 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:29.045 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:13:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.551 349552 DEBUG nova.network.neutron [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.566 349552 INFO nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Took 1.80 seconds to deallocate network for instance.#033[00m
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.613 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.614 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.676 349552 DEBUG nova.compute.manager [req-5800c343-bbb4-4ae0-9826-74d17d2571bf req-a315565c-f7c2-4689-800c-2ee671c8f35c a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-deleted-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:29 compute-0 podman[452014]: 2025-12-05 02:13:29.722550905 +0000 UTC m=+0.116482022 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:13:29 compute-0 podman[452013]: 2025-12-05 02:13:29.737118394 +0000 UTC m=+0.130430194 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  5 02:13:29 compute-0 podman[158197]: time="2025-12-05T02:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 46279 "" "Go-http-client/1.1"
Dec  5 02:13:29 compute-0 nova_compute[349548]: 2025-12-05 02:13:29.768 349552 DEBUG oslo_concurrency.processutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:13:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 9592 "" "Go-http-client/1.1"
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.198 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.199 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.200 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.201 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.202 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.202 349552 WARNING nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state deleted and task_state None.#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.203 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.204 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.205 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.205 349552 DEBUG oslo_concurrency.lockutils [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.208 349552 DEBUG nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] No waiting events found dispatching network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.209 349552 WARNING nova.compute.manager [req-4a6a493b-a86a-4d6d-a427-103c656ffeea req-014c07d3-f523-4537-bb5e-eb37a2468c39 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Received unexpected event network-vif-plugged-94c7e2c9-6aeb-4be2-a022-8cd7ad27d978 for instance with vm_state deleted and task_state None.#033[00m
Dec  5 02:13:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 395 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 603 KiB/s rd, 4.3 MiB/s wr, 132 op/s
Dec  5 02:13:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:13:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3394186467' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.340 349552 DEBUG oslo_concurrency.processutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.364 349552 DEBUG nova.compute.provider_tree [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.395 349552 DEBUG nova.scheduler.client.report [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.422 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.465 349552 INFO nova.scheduler.client.report [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Deleted allocations for instance e184a71d-1d91-4999-bb53-73c2caa1110a#033[00m
Dec  5 02:13:30 compute-0 nova_compute[349548]: 2025-12-05 02:13:30.577 349552 DEBUG oslo_concurrency.lockutils [None req-0bdd156d-f21a-48ba-90b1-55afc655dade 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "e184a71d-1d91-4999-bb53-73c2caa1110a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.971s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: ERROR   02:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:13:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.570 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.571 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.572 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.574 349552 INFO nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Terminating instance#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.577 349552 DEBUG nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:13:31 compute-0 kernel: tap1e754fc7-10 (unregistering): left promiscuous mode
Dec  5 02:13:31 compute-0 NetworkManager[49092]: <info>  [1764900811.7093] device (tap1e754fc7-10): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00167|binding|INFO|Releasing lport 1e754fc7-106a-43d2-a675-79c30089904b from this chassis (sb_readonly=0)
Dec  5 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00168|binding|INFO|Setting lport 1e754fc7-106a-43d2-a675-79c30089904b down in Southbound
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.740 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:31 compute-0 ovn_controller[89286]: 2025-12-05T02:13:31Z|00169|binding|INFO|Removing iface tap1e754fc7-10 ovn-installed in OVS
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.750 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.749 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:49:42 10.100.0.11'], port_security=['fa:16:3e:ab:49:42 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '1fcee2c4-ccfc-4651-bc90-a606a4e46e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6aaead05b2404fec8f687504ed800a2b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6637e5fa-33c5-4d8a-98b9-4b42baed7ff5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ee2a399c-ba53-4ea4-9f46-ca7b46a10984, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=1e754fc7-106a-43d2-a675-79c30089904b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.750 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 1e754fc7-106a-43d2-a675-79c30089904b in datapath 580f50f3-cfd1-4167-ba29-a8edbd53ee0f unbound from our chassis#033[00m
Dec  5 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.751 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 580f50f3-cfd1-4167-ba29-a8edbd53ee0f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.752 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[51c663da-6329-4b21-a361-baa8ba8a4d13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:31 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:31.753 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f namespace which is not needed anymore#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.769 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec  5 02:13:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 47.082s CPU time.
Dec  5 02:13:31 compute-0 systemd-machined[138700]: Machine qemu-13-instance-0000000c terminated.
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.840 349552 INFO nova.virt.libvirt.driver [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Instance destroyed successfully.#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.841 349552 DEBUG nova.objects.instance [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lazy-loading 'resources' on Instance uuid 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.857 349552 DEBUG nova.virt.libvirt.vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:11:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-593464214',display_name='tempest-TestNetworkBasicOps-server-593464214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-593464214',id=12,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPalP/AzwmHbA95rHCd/QJUJ7wbPS0Rqk62UPUO5FJAN2XrqFXhwvH10HGMSigesY1L3ja9sPfGII3cyjD9vy9gcLVsBBYGCRjTM6JwQSUcRRAf5rls2BCt8IBDTT+ISQg==',key_name='tempest-TestNetworkBasicOps-727356260',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:11:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6aaead05b2404fec8f687504ed800a2b',ramdisk_id='',reservation_id='r-bpaczbpy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-576606253',owner_user_name='tempest-TestNetworkBasicOps-576606253-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:11:39Z,user_data=None,user_id='2e61f46e24a240608d1523fb5265d3ac',uuid=1fcee2c4-ccfc-4651-bc90-a606a4e46e0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.858 349552 DEBUG nova.network.os_vif_util [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converting VIF {"id": "1e754fc7-106a-43d2-a675-79c30089904b", "address": "fa:16:3e:ab:49:42", "network": {"id": "580f50f3-cfd1-4167-ba29-a8edbd53ee0f", "bridge": "br-int", "label": "tempest-network-smoke--2137061445", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6aaead05b2404fec8f687504ed800a2b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1e754fc7-10", "ovs_interfaceid": "1e754fc7-106a-43d2-a675-79c30089904b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.858 349552 DEBUG nova.network.os_vif_util [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.859 349552 DEBUG os_vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.861 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.861 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1e754fc7-10, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.865 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:13:31 compute-0 nova_compute[349548]: 2025-12-05 02:13:31.868 349552 INFO os_vif [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:49:42,bridge_name='br-int',has_traffic_filtering=True,id=1e754fc7-106a-43d2-a675-79c30089904b,network=Network(580f50f3-cfd1-4167-ba29-a8edbd53ee0f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1e754fc7-10')#033[00m
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : haproxy version is 2.8.14-c23fe91
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [NOTICE]   (448668) : path to executable is /usr/sbin/haproxy
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : Exiting Master process...
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : Exiting Master process...
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [ALERT]    (448668) : Current worker (448670) exited with code 143 (Terminated)
Dec  5 02:13:32 compute-0 neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f[448664]: [WARNING]  (448668) : All workers exited. Exiting... (0)
Dec  5 02:13:32 compute-0 systemd[1]: libpod-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope: Deactivated successfully.
Dec  5 02:13:32 compute-0 podman[452121]: 2025-12-05 02:13:32.028603002 +0000 UTC m=+0.089139955 container died df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.048 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e-userdata-shm.mount: Deactivated successfully.
Dec  5 02:13:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1f4a9f0791655bc892fe852878dc488b3de35fa469b0c521274d81205f10f4-merged.mount: Deactivated successfully.
Dec  5 02:13:32 compute-0 podman[452121]: 2025-12-05 02:13:32.100367137 +0000 UTC m=+0.160904040 container cleanup df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  5 02:13:32 compute-0 systemd[1]: libpod-conmon-df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e.scope: Deactivated successfully.
Dec  5 02:13:32 compute-0 podman[452150]: 2025-12-05 02:13:32.238025914 +0000 UTC m=+0.096698487 container remove df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.245 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.255 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a1f677-19d7-48f6-a602-786eb9f54d21]: (4, ('Fri Dec  5 02:13:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f (df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e)\ndf4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e\nFri Dec  5 02:13:32 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f (df4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e)\ndf4bb467200e60b85325558aa5683d0298efdeef6e06afa38b71f727f10b580e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 4.3 MiB/s wr, 159 op/s
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.260 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f46495-5c3b-4565-9843-79003c856322]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.265 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580f50f3-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.267 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:32 compute-0 kernel: tap580f50f3-c0: left promiscuous mode
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.273 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c432d6c9-db46-4848-9dd7-dfc5e1dd2d57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.287 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.301 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[2f772dfc-d30d-4eb3-9765-3fa03c7fe89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.303 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[afadae2c-1dfe-4124-92be-82aae715ed28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.330 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[d564711e-2cf1-4fe3-bd15-1fcb5a048f7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678547, 'reachable_time': 15221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452164, 'error': None, 'target': 'ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.335 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-580f50f3-cfd1-4167-ba29-a8edbd53ee0f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:13:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:32.335 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[3b769e36-ebb6-48be-b954-347d64cee3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d580f50f3\x2dcfd1\x2d4167\x2dba29\x2da8edbd53ee0f.mount: Deactivated successfully.
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.372 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG oslo_concurrency.lockutils [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.373 349552 DEBUG nova.compute.manager [req-75fb96ae-1818-4638-af4d-def7657d4ea8 req-8fbec432-cb32-4d36-a510-056727b0c9e4 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-unplugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.643 349552 INFO nova.virt.libvirt.driver [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deleting instance files /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_del#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.644 349552 INFO nova.virt.libvirt.driver [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deletion of /var/lib/nova/instances/1fcee2c4-ccfc-4651-bc90-a606a4e46e0f_del complete#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.697 349552 INFO nova.compute.manager [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.698 349552 DEBUG oslo.service.loopingcall [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.699 349552 DEBUG nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:13:32 compute-0 nova_compute[349548]: 2025-12-05 02:13:32.700 349552 DEBUG nova.network.neutron [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:13:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.526 349552 DEBUG nova.network.neutron [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.546 349552 INFO nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Took 0.85 seconds to deallocate network for instance.#033[00m
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.599 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.600 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.608 349552 DEBUG nova.compute.manager [req-19c7028d-b0da-40af-86da-817063863dda req-f62017fa-8218-4d06-9896-38605619e946 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-deleted-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:33 compute-0 nova_compute[349548]: 2025-12-05 02:13:33.701 349552 DEBUG oslo_concurrency.processutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:13:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:13:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2611683973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.200 349552 DEBUG oslo_concurrency.processutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.212 349552 DEBUG nova.compute.provider_tree [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.234 349552 DEBUG nova.scheduler.client.report [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.258 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 295 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 233 KiB/s rd, 145 KiB/s wr, 72 op/s
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.284 349552 INFO nova.scheduler.client.report [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Deleted allocations for instance 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.353 349552 DEBUG oslo_concurrency.lockutils [None req-34166e88-80e2-4413-b78d-ad98044a7870 2e61f46e24a240608d1523fb5265d3ac 6aaead05b2404fec8f687504ed800a2b - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.445 349552 DEBUG nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.446 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.451 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.452 349552 DEBUG oslo_concurrency.lockutils [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "1fcee2c4-ccfc-4651-bc90-a606a4e46e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.452 349552 DEBUG nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] No waiting events found dispatching network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:34 compute-0 nova_compute[349548]: 2025-12-05 02:13:34.453 349552 WARNING nova.compute.manager [req-f1fbe6e3-3bbf-4e23-b829-c1da29617176 req-fddbe79d-eeaa-4ba1-a63a-0aee99a925a2 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Received unexpected event network-vif-plugged-1e754fc7-106a-43d2-a675-79c30089904b for instance with vm_state deleted and task_state None.#033[00m
Dec  5 02:13:34 compute-0 podman[452188]: 2025-12-05 02:13:34.720423603 +0000 UTC m=+0.113983502 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  5 02:13:34 compute-0 podman[452187]: 2025-12-05 02:13:34.764873032 +0000 UTC m=+0.162274079 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS)
Dec  5 02:13:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 268 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 237 KiB/s rd, 146 KiB/s wr, 80 op/s
Dec  5 02:13:36 compute-0 podman[452222]: 2025-12-05 02:13:36.705981789 +0000 UTC m=+0.105198815 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:13:36 compute-0 ovn_controller[89286]: 2025-12-05T02:13:36Z|00170|binding|INFO|Releasing lport 9db11503-fcc0-46ec-ad9b-de48fe796de4 from this chassis (sb_readonly=0)
Dec  5 02:13:36 compute-0 ovn_controller[89286]: 2025-12-05T02:13:36Z|00171|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:13:36 compute-0 nova_compute[349548]: 2025-12-05 02:13:36.866 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:36 compute-0 nova_compute[349548]: 2025-12-05 02:13:36.957 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:37 compute-0 nova_compute[349548]: 2025-12-05 02:13:37.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.070 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:13:38 compute-0 nova_compute[349548]: 2025-12-05 02:13:38.109 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:13:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 29 KiB/s wr, 61 op/s
Dec  5 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:39 compute-0 nova_compute[349548]: 2025-12-05 02:13:39.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:13:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.106 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.107 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:13:41 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:13:41 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3237394633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.639 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.776 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.776 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.784 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.785 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.872 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.877 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900806.8760295, e184a71d-1d91-4999-bb53-73c2caa1110a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.878 349552 INFO nova.compute.manager [-] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:13:41 compute-0 nova_compute[349548]: 2025-12-05 02:13:41.913 349552 DEBUG nova.compute.manager [None req-757ccffd-2fd5-4cbf-aa59-f855945d94b7 - - - - - -] [instance: e184a71d-1d91-4999-bb53-73c2caa1110a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.250 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 27 KiB/s wr, 47 op/s
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.326 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.327 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3530MB free_disk=59.897274017333984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.328 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.422 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.423 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.424 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.446 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.472 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.473 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.495 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.514 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 02:13:42 compute-0 nova_compute[349548]: 2025-12-05 02:13:42.560 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:13:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:13:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/661731154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.024 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.043 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.078 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:13:43 compute-0 nova_compute[349548]: 2025-12-05 02:13:43.079 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:44 compute-0 nova_compute[349548]: 2025-12-05 02:13:44.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:44 compute-0 nova_compute[349548]: 2025-12-05 02:13:44.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Dec  5 02:13:45 compute-0 nova_compute[349548]: 2025-12-05 02:13:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:13:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:13:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:13:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2050494399' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:13:45 compute-0 podman[452287]: 2025-12-05 02:13:45.701058222 +0000 UTC m=+0.113178860 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec  5 02:13:45 compute-0 podman[452295]: 2025-12-05 02:13:45.707088221 +0000 UTC m=+0.093557279 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec  5 02:13:45 compute-0 podman[452288]: 2025-12-05 02:13:45.716354861 +0000 UTC m=+0.120315860 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:13:45 compute-0 podman[452289]: 2025-12-05 02:13:45.752710102 +0000 UTC m=+0.151287840 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec  5 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 KiB/s wr, 21 op/s
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:13:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.833 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900811.8315182, 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.834 349552 INFO nova.compute.manager [-] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:46 compute-0 nova_compute[349548]: 2025-12-05 02:13:46.879 349552 DEBUG nova.compute.manager [None req-b0aa18d8-3ee7-42d7-88f9-508abec97c0f - - - - - -] [instance: 1fcee2c4-ccfc-4651-bc90-a606a4e46e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:13:47 compute-0 nova_compute[349548]: 2025-12-05 02:13:47.226 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:47 compute-0 nova_compute[349548]: 2025-12-05 02:13:47.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:48 compute-0 nova_compute[349548]: 2025-12-05 02:13:48.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:13:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Dec  5 02:13:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:13:51 compute-0 nova_compute[349548]: 2025-12-05 02:13:51.881 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:51 compute-0 nova_compute[349548]: 2025-12-05 02:13:51.998 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:52 compute-0 nova_compute[349548]: 2025-12-05 02:13:52.258 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.2 KiB/s wr, 0 op/s
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:52.991 287490 DEBUG eventlet.wsgi.server [-] (287490) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:52.994 287490 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: Accept: */*#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: Connection: close#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: Content-Type: text/plain#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: Host: 169.254.169.254#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: User-Agent: curl/7.84.0#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: X-Forwarded-For: 10.100.0.12#015
Dec  5 02:13:52 compute-0 ovn_metadata_agent[287107]: X-Ovn-Network-Id: 297ab129-d19a-4a0e-893c-731678c3b7a7 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  5 02:13:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.224 287490 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  5 02:13:54 compute-0 haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7[450910]: 10.100.0.12:43802 [05/Dec/2025:02:13:52.990] listener listener/metadata 0/0/0/1234/1234 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.224 287490 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.2309959#033[00m
Dec  5 02:13:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.367 287490 DEBUG eventlet.wsgi.server [-] (287490) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.368 287490 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: Accept: */*#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: Connection: close#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: Content-Length: 100#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: Content-Type: application/x-www-form-urlencoded#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: Host: 169.254.169.254#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: User-Agent: curl/7.84.0#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: X-Forwarded-For: 10.100.0.12#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: X-Ovn-Network-Id: 297ab129-d19a-4a0e-893c-731678c3b7a7#015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: #015
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.633 287490 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Dec  5 02:13:54 compute-0 haproxy-metadata-proxy-297ab129-d19a-4a0e-893c-731678c3b7a7[450910]: 10.100.0.12:43810 [05/Dec/2025:02:13:54.365] listener listener/metadata 0/0/0/267/267 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Dec  5 02:13:54 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:54.633 287490 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2655857#033[00m
Dec  5 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.211 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 2.2 KiB/s wr, 1 op/s
Dec  5 02:13:56 compute-0 nova_compute[349548]: 2025-12-05 02:13:56.886 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.137 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.138 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.142 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.142 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.143 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.145 349552 INFO nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Terminating instance#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.147 349552 DEBUG nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:13:57 compute-0 kernel: tapd5201944-81 (unregistering): left promiscuous mode
Dec  5 02:13:57 compute-0 NetworkManager[49092]: <info>  [1764900837.2704] device (tapd5201944-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.283 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00172|binding|INFO|Releasing lport d5201944-8184-405e-ae5f-b743e1bd7399 from this chassis (sb_readonly=0)
Dec  5 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00173|binding|INFO|Setting lport d5201944-8184-405e-ae5f-b743e1bd7399 down in Southbound
Dec  5 02:13:57 compute-0 ovn_controller[89286]: 2025-12-05T02:13:57Z|00174|binding|INFO|Removing iface tapd5201944-81 ovn-installed in OVS
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.287 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.297 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:b5:d5 10.100.0.12'], port_security=['fa:16:3e:8f:b5:d5 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '117d1772-87cc-4a3d-bf07-3f9b49ac0c63', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-297ab129-d19a-4a0e-893c-731678c3b7a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '286d2d767009421bb0c889a0ff65b2a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cea42b97-22e3-42f2-b4a9-e60ab6e5a3f6 f4a2d83a-c7b3-4fde-b9ec-59d46e5208fb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.207'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ff1f531e-a659-4463-9351-3086ed6c2f8e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=d5201944-8184-405e-ae5f-b743e1bd7399) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.299 287122 INFO neutron.agent.ovn.metadata.agent [-] Port d5201944-8184-405e-ae5f-b743e1bd7399 in datapath 297ab129-d19a-4a0e-893c-731678c3b7a7 unbound from our chassis#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.303 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 297ab129-d19a-4a0e-893c-731678c3b7a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.304 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[20612916-8591-4250-b504-4719987d7f91]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.305 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 namespace which is not needed anymore#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.315 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec  5 02:13:57 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 44.580s CPU time.
Dec  5 02:13:57 compute-0 systemd-machined[138700]: Machine qemu-15-instance-0000000e terminated.
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.411 349552 INFO nova.virt.libvirt.driver [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Instance destroyed successfully.#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.420 349552 DEBUG nova.objects.instance [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lazy-loading 'resources' on Instance uuid 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.437 349552 DEBUG nova.virt.libvirt.vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:12:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1301152906',display_name='tempest-TestServerBasicOps-server-1301152906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1301152906',id=14,image_ref='e9091bfb-b431-47c9-a284-79372046956b',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDCz5vjwlgDWbvwiwH6Lrc3odqUa7TZ3EfOipPX5fpPxPUspT7EN7quA0kvbAyTCNWf/e9htL6cMWK3K35T7n3AN3hOq0SEzHNsNLt1sUvuz6ePIFT2WS8FYfWxAPVEIpA==',key_name='tempest-TestServerBasicOps-1536427465',keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:12:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='286d2d767009421bb0c889a0ff65b2a2',ramdisk_id='',reservation_id='r-iqi50j5i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e9091bfb-b431-47c9-a284-79372046956b',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1996691968',owner_user_name='tempest-TestServerBasicOps-1996691968-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69e134c969b04dc58a1d1556d8ecf4a8',uuid=117d1772-87cc-4a3d-bf07-3f9b49ac0c63,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.437 349552 DEBUG nova.network.os_vif_util [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converting VIF {"id": "d5201944-8184-405e-ae5f-b743e1bd7399", "address": "fa:16:3e:8f:b5:d5", "network": {"id": "297ab129-d19a-4a0e-893c-731678c3b7a7", "bridge": "br-int", "label": "tempest-TestServerBasicOps-588084580-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "286d2d767009421bb0c889a0ff65b2a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5201944-81", "ovs_interfaceid": "d5201944-8184-405e-ae5f-b743e1bd7399", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.439 349552 DEBUG nova.network.os_vif_util [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.440 349552 DEBUG os_vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.442 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5201944-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.444 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.450 349552 INFO os_vif [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8f:b5:d5,bridge_name='br-int',has_traffic_filtering=True,id=d5201944-8184-405e-ae5f-b743e1bd7399,network=Network(297ab129-d19a-4a0e-893c-731678c3b7a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5201944-81')#033[00m
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : haproxy version is 2.8.14-c23fe91
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [NOTICE]   (450908) : path to executable is /usr/sbin/haproxy
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : Exiting Master process...
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : Exiting Master process...
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [ALERT]    (450908) : Current worker (450910) exited with code 143 (Terminated)
Dec  5 02:13:57 compute-0 neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7[450903]: [WARNING]  (450908) : All workers exited. Exiting... (0)
Dec  5 02:13:57 compute-0 systemd[1]: libpod-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope: Deactivated successfully.
Dec  5 02:13:57 compute-0 podman[452400]: 2025-12-05 02:13:57.545543232 +0000 UTC m=+0.073903856 container died 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad-userdata-shm.mount: Deactivated successfully.
Dec  5 02:13:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-596f961f4772e7095f34b4530a56ee23aa3e77a9b26fe356092ec6991cf0ede7-merged.mount: Deactivated successfully.
Dec  5 02:13:57 compute-0 podman[452400]: 2025-12-05 02:13:57.626339821 +0000 UTC m=+0.154700415 container cleanup 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec  5 02:13:57 compute-0 systemd[1]: libpod-conmon-6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad.scope: Deactivated successfully.
Dec  5 02:13:57 compute-0 podman[452443]: 2025-12-05 02:13:57.771696764 +0000 UTC m=+0.097211291 container remove 6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.789 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[eac2a280-d9a9-4800-9c65-02884a9691af]: (4, ('Fri Dec  5 02:13:57 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 (6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad)\n6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad\nFri Dec  5 02:13:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 (6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad)\n6f0c52049aacf7629ee6bf5752ade983525a5a45f00ee3b2ed23eb855c5cc2ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.793 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[3fa0413b-4fb4-497b-8f52-687fa865c372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.795 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap297ab129-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.797 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 kernel: tap297ab129-d0: left promiscuous mode
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.816 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[fdae9b21-57c8-41e3-82b4-d43bd7ee25fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.821 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.831 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.831 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG oslo_concurrency.lockutils [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:13:57 compute-0 nova_compute[349548]: 2025-12-05 02:13:57.832 349552 DEBUG nova.compute.manager [req-5a733769-9068-4f22-839c-1ab9fb2b44fe req-320a54e7-3479-42e1-bd65-1379ac021db1 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-unplugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.839 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[9de92e9c-51d4-4bd3-ac62-a9c6697c45e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.841 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a6185ec6-0209-421e-b15c-420e4c510dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.880 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[07db7734-901c-40f4-a8cc-9fb46ea36cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 685127, 'reachable_time': 20794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 452455, 'error': None, 'target': 'ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d297ab129\x2dd19a\x2d4a0e\x2d893c\x2d731678c3b7a7.mount: Deactivated successfully.
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.887 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-297ab129-d19a-4a0e-893c-731678c3b7a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:13:57 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:13:57.888 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c32868-b449-4423-ad31-d082cc3669c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.190 349552 INFO nova.virt.libvirt.driver [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deleting instance files /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_del#033[00m
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.193 349552 INFO nova.virt.libvirt.driver [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deletion of /var/lib/nova/instances/117d1772-87cc-4a3d-bf07-3f9b49ac0c63_del complete#033[00m
Dec  5 02:13:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:13:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.349 349552 INFO nova.compute.manager [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 1.20 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.350 349552 DEBUG oslo.service.loopingcall [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.351 349552 DEBUG nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:13:58 compute-0 nova_compute[349548]: 2025-12-05 02:13:58.352 349552 DEBUG nova.network.neutron [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:13:59 compute-0 podman[158197]: time="2025-12-05T02:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:13:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8653 "" "Go-http-client/1.1"
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.030 349552 DEBUG nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.031 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.032 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 DEBUG oslo_concurrency.lockutils [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 DEBUG nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] No waiting events found dispatching network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.033 349552 WARNING nova.compute.manager [req-34d420b7-1558-4c41-a177-d6f3dad97600 req-f43ff667-ee79-49e0-afee-caa393042348 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received unexpected event network-vif-plugged-d5201944-8184-405e-ae5f-b743e1bd7399 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:14:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 2.5 KiB/s wr, 1 op/s
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.447 349552 DEBUG nova.network.neutron [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.474 349552 INFO nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Took 2.12 seconds to deallocate network for instance.#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.535 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.537 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.540 349552 DEBUG nova.compute.manager [req-ab023671-1978-47e4-824d-e21aa9dfae1f req-56fce45c-f861-4971-985a-25992e3432e8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Received event network-vif-deleted-d5201944-8184-405e-ae5f-b743e1bd7399 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:14:00 compute-0 nova_compute[349548]: 2025-12-05 02:14:00.620 349552 DEBUG oslo_concurrency.processutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:14:00 compute-0 podman[452456]: 2025-12-05 02:14:00.703420963 +0000 UTC m=+0.121610366 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  5 02:14:00 compute-0 podman[452457]: 2025-12-05 02:14:00.735246637 +0000 UTC m=+0.134817997 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:14:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:14:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1653726545' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.086 349552 DEBUG oslo_concurrency.processutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.103 349552 DEBUG nova.compute.provider_tree [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.128 349552 DEBUG nova.scheduler.client.report [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.158 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.198 349552 INFO nova.scheduler.client.report [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Deleted allocations for instance 117d1772-87cc-4a3d-bf07-3f9b49ac0c63#033[00m
Dec  5 02:14:01 compute-0 nova_compute[349548]: 2025-12-05 02:14:01.285 349552 DEBUG oslo_concurrency.lockutils [None req-fb07345a-6a63-407d-a339-9d188a344d63 69e134c969b04dc58a1d1556d8ecf4a8 286d2d767009421bb0c889a0ff65b2a2 - - default default] Lock "117d1772-87cc-4a3d-bf07-3f9b49ac0c63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:14:01 compute-0 openstack_network_exporter[366555]: ERROR   02:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:14:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Dec  5 02:14:02 compute-0 nova_compute[349548]: 2025-12-05 02:14:02.314 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:02 compute-0 nova_compute[349548]: 2025-12-05 02:14:02.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:03 compute-0 ovn_controller[89286]: 2025-12-05T02:14:03Z|00175|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:14:03 compute-0 nova_compute[349548]: 2025-12-05 02:14:03.539 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:03 compute-0 ovn_controller[89286]: 2025-12-05T02:14:03Z|00176|binding|INFO|Releasing lport 9309009c-26a0-4ed9-8142-14ad142ca1c0 from this chassis (sb_readonly=0)
Dec  5 02:14:03 compute-0 nova_compute[349548]: 2025-12-05 02:14:03.870 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec  5 02:14:05 compute-0 podman[452520]: 2025-12-05 02:14:05.679817039 +0000 UTC m=+0.096977925 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  5 02:14:05 compute-0 podman[452521]: 2025-12-05 02:14:05.739253218 +0000 UTC m=+0.145866148 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:14:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.6 KiB/s wr, 31 op/s
Dec  5 02:14:07 compute-0 nova_compute[349548]: 2025-12-05 02:14:07.316 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:07 compute-0 nova_compute[349548]: 2025-12-05 02:14:07.449 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:07 compute-0 podman[452559]: 2025-12-05 02:14:07.72825679 +0000 UTC m=+0.133404858 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec  5 02:14:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Dec  5 02:14:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec  5 02:14:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Dec  5 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.320 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.404 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764900837.402943, 117d1772-87cc-4a3d-bf07-3f9b49ac0c63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.404 349552 INFO nova.compute.manager [-] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.430 349552 DEBUG nova.compute.manager [None req-2e75218f-4cf8-40fe-af15-84dd697ec005 - - - - - -] [instance: 117d1772-87cc-4a3d-bf07-3f9b49ac0c63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:14:12 compute-0 nova_compute[349548]: 2025-12-05 02:14:12.452 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:14:16
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'images', 'backups', 'volumes']
Dec  5 02:14:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:14:16 compute-0 podman[452582]: 2025-12-05 02:14:16.913202915 +0000 UTC m=+0.096538662 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 02:14:16 compute-0 podman[452579]: 2025-12-05 02:14:16.91586864 +0000 UTC m=+0.118719686 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:14:16 compute-0 podman[452580]: 2025-12-05 02:14:16.938506586 +0000 UTC m=+0.131839974 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:14:16 compute-0 podman[452581]: 2025-12-05 02:14:16.987298706 +0000 UTC m=+0.171793826 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 02:14:17 compute-0 nova_compute[349548]: 2025-12-05 02:14:17.323 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:17 compute-0 nova_compute[349548]: 2025-12-05 02:14:17.454 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:14:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:22 compute-0 nova_compute[349548]: 2025-12-05 02:14:22.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:22 compute-0 nova_compute[349548]: 2025-12-05 02:14:22.457 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:14:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9676 writes, 36K keys, 9676 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9676 writes, 2587 syncs, 3.74 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2307 writes, 8716 keys, 2307 commit groups, 1.0 writes per commit group, ingest: 9.18 MB, 0.02 MB/s#012Interval WAL: 2307 writes, 929 syncs, 2.48 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:14:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007575561022660676 of space, bias 1.0, pg target 0.2272668306798203 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:14:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:14:27 compute-0 nova_compute[349548]: 2025-12-05 02:14:27.327 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:27 compute-0 nova_compute[349548]: 2025-12-05 02:14:27.459 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:14:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 2982 syncs, 3.80 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2397 writes, 9102 keys, 2397 commit groups, 1.0 writes per commit group, ingest: 9.64 MB, 0.02 MB/s#012Interval WAL: 2397 writes, 959 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 966e5409-a857-48db-95ab-9b8542a433cc does not exist
Dec  5 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 60a2fb9f-1d55-44a5-90c5-049e59862260 does not exist
Dec  5 02:14:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a26da526-d659-4343-82aa-2e09f701fd87 does not exist
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:14:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:14:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:14:29 compute-0 podman[158197]: time="2025-12-05T02:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:14:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8659 "" "Go-http-client/1.1"
Dec  5 02:14:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.664076058 +0000 UTC m=+0.089598667 container create 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.63245719 +0000 UTC m=+0.057979879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:30 compute-0 systemd[1]: Started libpod-conmon-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope.
Dec  5 02:14:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.83502665 +0000 UTC m=+0.260549329 container init 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.850499004 +0000 UTC m=+0.276021593 container start 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.856087891 +0000 UTC m=+0.281610540 container attach 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:14:30 compute-0 relaxed_goldberg[452947]: 167 167
Dec  5 02:14:30 compute-0 systemd[1]: libpod-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope: Deactivated successfully.
Dec  5 02:14:30 compute-0 conmon[452947]: conmon 53f901430caa289cf037 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope/container/memory.events
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.862274144 +0000 UTC m=+0.287796743 container died 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ecb1c9fe731822801c747542dc82d7e1bf9597fb0f77f6d033ab7d90e651f0-merged.mount: Deactivated successfully.
Dec  5 02:14:30 compute-0 podman[452946]: 2025-12-05 02:14:30.917511575 +0000 UTC m=+0.145859966 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec  5 02:14:30 compute-0 podman[452931]: 2025-12-05 02:14:30.923955206 +0000 UTC m=+0.349477775 container remove 53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:14:30 compute-0 systemd[1]: libpod-conmon-53f901430caa289cf037ee9b965ffa6556cd3e750cf59430c74dfb83804d9c26.scope: Deactivated successfully.
Dec  5 02:14:30 compute-0 podman[452955]: 2025-12-05 02:14:30.969535756 +0000 UTC m=+0.150286391 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.156138057 +0000 UTC m=+0.072827416 container create 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.127555405 +0000 UTC m=+0.044244804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:31 compute-0 systemd[1]: Started libpod-conmon-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope.
Dec  5 02:14:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.328808447 +0000 UTC m=+0.245497766 container init 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.345205467 +0000 UTC m=+0.261894796 container start 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:14:31 compute-0 podman[453006]: 2025-12-05 02:14:31.350506766 +0000 UTC m=+0.267196105 container attach 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: ERROR   02:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:14:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:14:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:32 compute-0 nova_compute[349548]: 2025-12-05 02:14:32.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:32 compute-0 nova_compute[349548]: 2025-12-05 02:14:32.463 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:32 compute-0 interesting_newton[453021]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:14:32 compute-0 interesting_newton[453021]: --> relative data size: 1.0
Dec  5 02:14:32 compute-0 interesting_newton[453021]: --> All data devices are unavailable
Dec  5 02:14:32 compute-0 systemd[1]: libpod-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Deactivated successfully.
Dec  5 02:14:32 compute-0 systemd[1]: libpod-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Consumed 1.201s CPU time.
Dec  5 02:14:32 compute-0 podman[453006]: 2025-12-05 02:14:32.601471601 +0000 UTC m=+1.518160950 container died 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Dec  5 02:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-345cc48542ea1fcb84b9e64ef055ec21279afb938281b7286c7afc030fdb4b1f-merged.mount: Deactivated successfully.
Dec  5 02:14:32 compute-0 podman[453006]: 2025-12-05 02:14:32.86135342 +0000 UTC m=+1.778042779 container remove 938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_newton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:32 compute-0 systemd[1]: libpod-conmon-938ea7f5d38862d8cd372d01c6d6a8f9b03448a0a231b73070164bc9229c5987.scope: Deactivated successfully.
Dec  5 02:14:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:33 compute-0 podman[453200]: 2025-12-05 02:14:33.931620679 +0000 UTC m=+0.070916042 container create 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:14:33 compute-0 systemd[1]: Started libpod-conmon-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope.
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:33.906769962 +0000 UTC m=+0.046065355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.134358084 +0000 UTC m=+0.273653517 container init 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.149871179 +0000 UTC m=+0.289166582 container start 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.162603377 +0000 UTC m=+0.301898780 container attach 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:14:34 compute-0 angry_booth[453216]: 167 167
Dec  5 02:14:34 compute-0 systemd[1]: libpod-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope: Deactivated successfully.
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.168322738 +0000 UTC m=+0.307618141 container died 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b8eaa706de67091647c9322fb2bf65b68c19b92ba3933be50bfcc3483c6a0e6-merged.mount: Deactivated successfully.
Dec  5 02:14:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:34 compute-0 podman[453200]: 2025-12-05 02:14:34.333058774 +0000 UTC m=+0.472354167 container remove 0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_booth, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:14:34 compute-0 systemd[1]: libpod-conmon-0ea75c4f09caa29f9edb960f1d4446fa742ac4090f7e0b52a9a6bf21d7063ffa.scope: Deactivated successfully.
Dec  5 02:14:34 compute-0 podman[453240]: 2025-12-05 02:14:34.655369196 +0000 UTC m=+0.085359439 container create b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:14:34 compute-0 podman[453240]: 2025-12-05 02:14:34.619750275 +0000 UTC m=+0.049740588 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:34 compute-0 systemd[1]: Started libpod-conmon-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope.
Dec  5 02:14:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.021673434 +0000 UTC m=+0.451663737 container init b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.043090165 +0000 UTC m=+0.473080428 container start b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:14:35 compute-0 podman[453240]: 2025-12-05 02:14:35.196532735 +0000 UTC m=+0.626522968 container attach b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:14:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s#012Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]: {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    "0": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "devices": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "/dev/loop3"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            ],
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_name": "ceph_lv0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_size": "21470642176",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "name": "ceph_lv0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "tags": {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_name": "ceph",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.crush_device_class": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.encrypted": "0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_id": "0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.vdo": "0"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            },
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "vg_name": "ceph_vg0"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        }
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    ],
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    "1": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "devices": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "/dev/loop4"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            ],
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_name": "ceph_lv1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_size": "21470642176",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "name": "ceph_lv1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "tags": {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_name": "ceph",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.crush_device_class": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.encrypted": "0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_id": "1",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.vdo": "0"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            },
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "vg_name": "ceph_vg1"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        }
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    ],
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    "2": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "devices": [
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "/dev/loop5"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            ],
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_name": "ceph_lv2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_size": "21470642176",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "name": "ceph_lv2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "tags": {
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.cluster_name": "ceph",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.crush_device_class": "",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.encrypted": "0",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osd_id": "2",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:                "ceph.vdo": "0"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            },
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "type": "block",
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:            "vg_name": "ceph_vg2"
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:        }
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]:    ]
Dec  5 02:14:35 compute-0 laughing_hamilton[453253]: }
Dec  5 02:14:35 compute-0 systemd[1]: libpod-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope: Deactivated successfully.
Dec  5 02:14:36 compute-0 podman[453264]: 2025-12-05 02:14:36.056633331 +0000 UTC m=+0.053412941 container died b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-533ef216754e28c0bb82e6da509087057248bd887792724f62f909f37e42df5b-merged.mount: Deactivated successfully.
Dec  5 02:14:36 compute-0 podman[453265]: 2025-12-05 02:14:36.155302133 +0000 UTC m=+0.142019120 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  5 02:14:36 compute-0 podman[453263]: 2025-12-05 02:14:36.259209261 +0000 UTC m=+0.244104947 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm)
Dec  5 02:14:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:36 compute-0 podman[453264]: 2025-12-05 02:14:36.511404544 +0000 UTC m=+0.508184154 container remove b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:14:36 compute-0 systemd[1]: libpod-conmon-b47d4f076530f13ea73ca5c68d300e2552d6869d5d761f09be271e98e0fa74ea.scope: Deactivated successfully.
Dec  5 02:14:37 compute-0 nova_compute[349548]: 2025-12-05 02:14:37.332 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:37 compute-0 nova_compute[349548]: 2025-12-05 02:14:37.465 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.570349165 +0000 UTC m=+0.070469700 container create f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Dec  5 02:14:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.546779053 +0000 UTC m=+0.046899608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:37 compute-0 systemd[1]: Started libpod-conmon-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope.
Dec  5 02:14:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.970796132 +0000 UTC m=+0.470916687 container init f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:37 compute-0 podman[453449]: 2025-12-05 02:14:37.992580104 +0000 UTC m=+0.492700659 container start f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:14:38 compute-0 podman[453449]: 2025-12-05 02:14:37.999833928 +0000 UTC m=+0.499954503 container attach f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:14:38 compute-0 heuristic_varahamihira[453467]: 167 167
Dec  5 02:14:38 compute-0 systemd[1]: libpod-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope: Deactivated successfully.
Dec  5 02:14:38 compute-0 podman[453449]: 2025-12-05 02:14:38.006503245 +0000 UTC m=+0.506623810 container died f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:14:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c382bea35d4f0a8a9666aca8f6f7f160e2f4010d9521670bbf0b0d335fcc6ca4-merged.mount: Deactivated successfully.
Dec  5 02:14:38 compute-0 podman[453449]: 2025-12-05 02:14:38.087299024 +0000 UTC m=+0.587419569 container remove f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_varahamihira, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:14:38 compute-0 podman[453466]: 2025-12-05 02:14:38.090618077 +0000 UTC m=+0.273478781 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=, distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:14:38 compute-0 systemd[1]: libpod-conmon-f2f1e02dc97fbdbf43c89da3af6368bdf89922065f43aaa6188687719d37e7cd.scope: Deactivated successfully.
Dec  5 02:14:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.324 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5c6f46e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.333 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.337 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:14:38.335139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:14:38.337367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.355359712 +0000 UTC m=+0.085069240 container create fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.361 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.361 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.365 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:14:38.364220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.366 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.367 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:14:38.367174) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.31540723 +0000 UTC m=+0.045116818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.432 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.433 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.435 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.435 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 systemd[1]: Started libpod-conmon-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope.
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:14:38.434863) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.438 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:14:38.437725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:14:38.441779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.442 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.443 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.445 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.445 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72839168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:14:38.445084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.447 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:14:38.447510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.487 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.489 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10935968399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.490 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.491 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.492 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.493 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.493 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.494 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.495 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.498 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:14:38.489597) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:14:38.492564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:14:38.495475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.503 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:14:38.502333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.504 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.506 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.509 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.510 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.511 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.513 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.514 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 183440000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.515 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.516 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.508 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:14:38.503688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:14:38.505314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:14:38.506505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:14:38.507813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.520 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.521 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.522 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.523 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:14:38.509469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:14:38.510831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:14:38.512226) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:14:38.513418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:14:38.514673) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:14:38.515754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:14:38.526 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:14:38.517040) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.551722327 +0000 UTC m=+0.281431895 container init fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.573351895 +0000 UTC m=+0.303061393 container start fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:14:38 compute-0 podman[453506]: 2025-12-05 02:14:38.578486299 +0000 UTC m=+0.308195827 container attach fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:14:39 compute-0 serene_pare[453521]: {
Dec  5 02:14:39 compute-0 serene_pare[453521]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_id": 0,
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "type": "bluestore"
Dec  5 02:14:39 compute-0 serene_pare[453521]:    },
Dec  5 02:14:39 compute-0 serene_pare[453521]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_id": 1,
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "type": "bluestore"
Dec  5 02:14:39 compute-0 serene_pare[453521]:    },
Dec  5 02:14:39 compute-0 serene_pare[453521]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_id": 2,
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:14:39 compute-0 serene_pare[453521]:        "type": "bluestore"
Dec  5 02:14:39 compute-0 serene_pare[453521]:    }
Dec  5 02:14:39 compute-0 serene_pare[453521]: }
Dec  5 02:14:39 compute-0 systemd[1]: libpod-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Deactivated successfully.
Dec  5 02:14:39 compute-0 podman[453506]: 2025-12-05 02:14:39.718646181 +0000 UTC m=+1.448355709 container died fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 02:14:39 compute-0 systemd[1]: libpod-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Consumed 1.155s CPU time.
Dec  5 02:14:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-68dc37b46001bba592061ce04c9f560a93ba75b42ef79495dfc19b0bf0cb795b-merged.mount: Deactivated successfully.
Dec  5 02:14:39 compute-0 podman[453506]: 2025-12-05 02:14:39.815741968 +0000 UTC m=+1.545451476 container remove fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:14:39 compute-0 systemd[1]: libpod-conmon-fa773b8cb0baeac3aee53ef0617fd7b819d44c897c78761b26a903917ea1b805.scope: Deactivated successfully.
Dec  5 02:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:14:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b6da5b51-4939-43e9-8c52-b201d938c612 does not exist
Dec  5 02:14:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b88f9982-44de-4f27-900c-504db4a23a7e does not exist
Dec  5 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:14:40 compute-0 nova_compute[349548]: 2025-12-05 02:14:40.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:14:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.080 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.081 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.081 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:14:41 compute-0 nova_compute[349548]: 2025-12-05 02:14:41.082 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:14:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.831 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.854 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.854 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.855 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.855 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.856 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:42 compute-0 nova_compute[349548]: 2025-12-05 02:14:42.856 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.118 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:14:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:14:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576510669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.616 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.722 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:14:43 compute-0 nova_compute[349548]: 2025-12-05 02:14:43.724 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.234 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.235 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3729MB free_disk=59.94283676147461GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.236 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.236 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:14:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.336 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.337 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.338 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.370 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  5 02:14:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 02:14:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:14:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/106377308' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.946 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.956 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  5 02:14:44 compute-0 nova_compute[349548]: 2025-12-05 02:14:44.983 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:14:45.016 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  5 02:14:45 compute-0 nova_compute[349548]: 2025-12-05 02:14:45.016 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:14:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:14:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:14:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:14:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229788711' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:14:46 compute-0 nova_compute[349548]: 2025-12-05 02:14:46.017 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:14:46 compute-0 nova_compute[349548]: 2025-12-05 02:14:46.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:14:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:14:46 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:47 compute-0 nova_compute[349548]: 2025-12-05 02:14:47.470 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:47 compute-0 podman[453662]: 2025-12-05 02:14:47.688100339 +0000 UTC m=+0.095012310 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 02:14:47 compute-0 podman[453663]: 2025-12-05 02:14:47.706097784 +0000 UTC m=+0.118528480 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:14:47 compute-0 podman[453664]: 2025-12-05 02:14:47.734242145 +0000 UTC m=+0.129900509 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  5 02:14:47 compute-0 podman[453669]: 2025-12-05 02:14:47.748574777 +0000 UTC m=+0.134026805 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 02:14:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:52 compute-0 nova_compute[349548]: 2025-12-05 02:14:52.343 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:52 compute-0 nova_compute[349548]: 2025-12-05 02:14:52.474 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:53 compute-0 ovn_controller[89286]: 2025-12-05T02:14:53Z|00177|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  5 02:14:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.212 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:14:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:14:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:14:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:57 compute-0 nova_compute[349548]: 2025-12-05 02:14:57.346 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:57 compute-0 nova_compute[349548]: 2025-12-05 02:14:57.478 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:14:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:14:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:14:59 compute-0 podman[158197]: time="2025-12-05T02:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:14:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8649 "" "Go-http-client/1.1"
Dec  5 02:15:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: ERROR   02:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:15:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:15:01 compute-0 podman[453746]: 2025-12-05 02:15:01.695341151 +0000 UTC m=+0.103153968 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:15:01 compute-0 podman[453747]: 2025-12-05 02:15:01.707677567 +0000 UTC m=+0.116659047 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:15:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:02 compute-0 nova_compute[349548]: 2025-12-05 02:15:02.349 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:15:02 compute-0 nova_compute[349548]: 2025-12-05 02:15:02.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:15:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:06 compute-0 podman[453787]: 2025-12-05 02:15:06.696056659 +0000 UTC m=+0.106038258 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, 
io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:15:06 compute-0 podman[453788]: 2025-12-05 02:15:06.743222303 +0000 UTC m=+0.136539834 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.068 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.069 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.069 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.070 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.104 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.128 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.129 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Image id 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e yields fingerprint ce40e952b4771285622230948599d16442d55b06 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.130 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] image 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e at (/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06): checking#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.131 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] image 773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e at (/var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.135 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.136 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] 292fd084-0808-4a80-adc1-6ab1f28e188a is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.137 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.138 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.139 349552 WARNING nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.139 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Active base files: /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.140 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Removable base files: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42 /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410 /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.141 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/af0f6d73e40706411141d751e7ebef271f1a5b42#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.142 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c50dad93a0c0d8de9b59bb98a1c7fb911608b410#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.143 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ffce62741223dc66a92b5b29c88e68e15f46caf3#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.143 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.144 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.145 349552 DEBUG nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.145 349552 INFO nova.virt.libvirt.imagecache [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.353 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:07 compute-0 nova_compute[349548]: 2025-12-05 02:15:07.485 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:08 compute-0 podman[453826]: 2025-12-05 02:15:08.710312461 +0000 UTC m=+0.110433453 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, version=9.4, release-0.7.12=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, release=1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 02:15:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:12 compute-0 nova_compute[349548]: 2025-12-05 02:15:12.356 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:12 compute-0 nova_compute[349548]: 2025-12-05 02:15:12.488 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:15:16
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'default.rgw.control', '.mgr']
Dec  5 02:15:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:15:17 compute-0 nova_compute[349548]: 2025-12-05 02:15:17.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:17 compute-0 nova_compute[349548]: 2025-12-05 02:15:17.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:15:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:18 compute-0 podman[453847]: 2025-12-05 02:15:18.694839162 +0000 UTC m=+0.091867441 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:15:18 compute-0 podman[453854]: 2025-12-05 02:15:18.712993972 +0000 UTC m=+0.099990859 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public)
Dec  5 02:15:18 compute-0 podman[453846]: 2025-12-05 02:15:18.723155268 +0000 UTC m=+0.137784951 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:15:18 compute-0 podman[453848]: 2025-12-05 02:15:18.763513771 +0000 UTC m=+0.155318763 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  5 02:15:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:22 compute-0 nova_compute[349548]: 2025-12-05 02:15:22.364 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:22 compute-0 nova_compute[349548]: 2025-12-05 02:15:22.495 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007575561022660676 of space, bias 1.0, pg target 0.2272668306798203 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:15:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:15:27 compute-0 nova_compute[349548]: 2025-12-05 02:15:27.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:27 compute-0 nova_compute[349548]: 2025-12-05 02:15:27.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:29 compute-0 podman[158197]: time="2025-12-05T02:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:15:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  5 02:15:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.654 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.655 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.679 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.802 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.803 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.818 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.819 349552 INFO nova.compute.claims [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  5 02:15:30 compute-0 nova_compute[349548]: 2025-12-05 02:15:30.940 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:15:31 compute-0 openstack_network_exporter[366555]: ERROR   02:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:15:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:15:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2236030647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.515 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.529 349552 DEBUG nova.compute.provider_tree [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.563 349552 DEBUG nova.scheduler.client.report [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.592 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.594 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.655 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.656 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.685 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.705 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.792 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.794 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.795 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating image(s)#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.835 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.879 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.933 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.945 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:31 compute-0 nova_compute[349548]: 2025-12-05 02:15:31.985 349552 DEBUG nova.policy [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.047 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.048 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "ce40e952b4771285622230948599d16442d55b06" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.048 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.049 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "ce40e952b4771285622230948599d16442d55b06" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.097 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.107 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.501 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.506 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ce40e952b4771285622230948599d16442d55b06 e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.399s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:32.535 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:15:32 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:32.540 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.684 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Successfully created port: afc3cf6c-cbe3-4163-920e-7122f474d371 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  5 02:15:32 compute-0 podman[454067]: 2025-12-05 02:15:32.703446503 +0000 UTC m=+0.104761363 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:15:32 compute-0 podman[454064]: 2025-12-05 02:15:32.704944945 +0000 UTC m=+0.114195998 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.705 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] resizing rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.875 349552 DEBUG nova.objects.instance [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'migration_context' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.891 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.892 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Ensure instance console log exists: /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.892 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.893 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:32 compute-0 nova_compute[349548]: 2025-12-05 02:15:32.893 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.404 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Successfully updated port: afc3cf6c-cbe3-4163-920e-7122f474d371 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.423 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.424 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.424 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG nova.compute.manager [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-changed-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG nova.compute.manager [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Refreshing instance network info cache due to event network-changed-afc3cf6c-cbe3-4163-920e-7122f474d371. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.505 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:15:33 compute-0 nova_compute[349548]: 2025-12-05 02:15:33.582 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  5 02:15:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 157 MiB data, 341 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:15:35 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:35.544 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.327 349552 DEBUG nova.network.neutron [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:15:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 166 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 524 KiB/s wr, 0 op/s
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.351 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.351 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance network_info: |[{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.352 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.352 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Refreshing network info cache for port afc3cf6c-cbe3-4163-920e-7122f474d371 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.357 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start _get_guest_xml network_info=[{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'encrypted': False, 'disk_bus': 'virtio', 'encryption_format': None, 'boot_index': 0, 'device_name': '/dev/vda', 'device_type': 'disk', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'image_id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.372 349552 WARNING nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.380 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.381 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.386 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.387 349552 DEBUG nova.virt.libvirt.host [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.388 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.388 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-05T02:07:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-05T02:11:06Z,direct_url=<?>,disk_format='qcow2',id=773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e,min_disk=0,min_ram=0,name='tempest-scenario-img--2105045224',owner='b01709a3378347e1a3f25eeb2b8b1bca',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-05T02:11:08Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.389 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.389 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.390 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.390 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.391 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.392 349552 DEBUG nova.virt.hardware [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.397 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:15:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3856466857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.952 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:36 compute-0 nova_compute[349548]: 2025-12-05 02:15:36.997 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.007 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.372 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Dec  5 02:15:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2213337067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.501 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.503 349552 DEBUG nova.virt.libvirt.vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.504 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.505 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.508 349552 DEBUG nova.objects.instance [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'pci_devices' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:15:37 compute-0 nova_compute[349548]: 2025-12-05 02:15:37.510 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:37 compute-0 podman[454223]: 2025-12-05 02:15:37.715260713 +0000 UTC m=+0.117769159 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:15:37 compute-0 podman[454222]: 2025-12-05 02:15:37.726354674 +0000 UTC m=+0.141519595 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true)
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.208 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] End _get_guest_xml xml=<domain type="kvm">
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <uuid>e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</uuid>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <name>instance-0000000f</name>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <memory>131072</memory>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <vcpu>1</vcpu>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <metadata>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:name>te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo</nova:name>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:creationTime>2025-12-05 02:15:36</nova:creationTime>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:flavor name="m1.nano">
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:memory>128</nova:memory>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:disk>1</nova:disk>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:swap>0</nova:swap>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:ephemeral>0</nova:ephemeral>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:vcpus>1</nova:vcpus>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </nova:flavor>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:owner>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:user uuid="99591ed8361e41579fee1d14f16bf0f7">tempest-PrometheusGabbiTest-257639068-project-member</nova:user>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:project uuid="b01709a3378347e1a3f25eeb2b8b1bca">tempest-PrometheusGabbiTest-257639068</nova:project>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </nova:owner>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:root type="image" uuid="773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <nova:ports>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <nova:port uuid="afc3cf6c-cbe3-4163-920e-7122f474d371">
Dec  5 02:15:38 compute-0 nova_compute[349548]:          <nova:ip type="fixed" address="10.100.2.8" ipVersion="4"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:        </nova:port>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </nova:ports>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </nova:instance>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </metadata>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <sysinfo type="smbios">
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <system>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="manufacturer">RDO</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="product">OpenStack Compute</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="serial">e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="uuid">e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <entry name="family">Virtual Machine</entry>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </system>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </sysinfo>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <os>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <boot dev="hd"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <smbios mode="sysinfo"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </os>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <features>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <acpi/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <apic/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <vmcoreinfo/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </features>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <clock offset="utc">
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <timer name="pit" tickpolicy="delay"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <timer name="hpet" present="no"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </clock>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <cpu mode="host-model" match="exact">
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <topology sockets="1" cores="1" threads="1"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </cpu>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  <devices>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <disk type="network" device="disk">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk">
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </source>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <target dev="vda" bus="virtio"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <disk type="network" device="cdrom">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <driver type="raw" cache="none"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <source protocol="rbd" name="vms/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config">
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <host name="192.168.122.100" port="6789"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </source>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <auth username="openstack">
Dec  5 02:15:38 compute-0 nova_compute[349548]:        <secret type="ceph" uuid="cbd280d3-cbd8-528b-ace6-2b3a887cdcee"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      </auth>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <target dev="sda" bus="sata"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </disk>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <interface type="ethernet">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <mac address="fa:16:3e:69:80:52"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <driver name="vhost" rx_queue_size="512"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <mtu size="1442"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <target dev="tapafc3cf6c-cb"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </interface>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <serial type="pty">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <log file="/var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/console.log" append="off"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </serial>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <video>
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <model type="virtio"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </video>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <input type="tablet" bus="usb"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <rng model="virtio">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <backend model="random">/dev/urandom</backend>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </rng>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="pci" model="pcie-root-port"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <controller type="usb" index="0"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    <memballoon model="virtio">
Dec  5 02:15:38 compute-0 nova_compute[349548]:      <stats period="10"/>
Dec  5 02:15:38 compute-0 nova_compute[349548]:    </memballoon>
Dec  5 02:15:38 compute-0 nova_compute[349548]:  </devices>
Dec  5 02:15:38 compute-0 nova_compute[349548]: </domain>
Dec  5 02:15:38 compute-0 nova_compute[349548]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.208 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Preparing to wait for external event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.209 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.209 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.210 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.211 349552 DEBUG nova.virt.libvirt.vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiT
est-257639068-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-05T02:15:31Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.211 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.212 349552 DEBUG nova.network.os_vif_util [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.213 349552 DEBUG os_vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.215 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.215 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.216 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.221 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.222 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafc3cf6c-cb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.222 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapafc3cf6c-cb, col_values=(('external_ids', {'iface-id': 'afc3cf6c-cbe3-4163-920e-7122f474d371', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:80:52', 'vm-uuid': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:38 compute-0 NetworkManager[49092]: <info>  [1764900938.2264] manager: (tapafc3cf6c-cb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.235 349552 INFO os_vif [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb')#033[00m
Dec  5 02:15:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.308 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.309 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.309 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] No VIF found with MAC fa:16:3e:69:80:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.310 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Using config drive#033[00m
Dec  5 02:15:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.357 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.813 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Creating config drive at /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.825 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78s77pex execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.861 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated VIF entry in instance network info cache for port afc3cf6c-cbe3-4163-920e-7122f474d371. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.862 349552 DEBUG nova.network.neutron [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.884 349552 DEBUG oslo_concurrency.lockutils [req-bc032658-6ce9-4449-a1fc-aa1001464151 req-913dbf6b-7767-4b15-94ab-31059bb58be8 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:15:38 compute-0 nova_compute[349548]: 2025-12-05 02:15:38.979 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp78s77pex" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.033 349552 DEBUG nova.storage.rbd_utils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] rbd image e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.046 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.307 349552 DEBUG oslo_concurrency.processutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.309 349552 INFO nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deleting local config drive /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.config because it was imported into RBD.#033[00m
Dec  5 02:15:39 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec  5 02:15:39 compute-0 systemd[1]: Started libvirt secret daemon.
Dec  5 02:15:39 compute-0 kernel: tapafc3cf6c-cb: entered promiscuous mode
Dec  5 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.4713] manager: (tapafc3cf6c-cb): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.473 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00178|binding|INFO|Claiming lport afc3cf6c-cbe3-4163-920e-7122f474d371 for this chassis.
Dec  5 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00179|binding|INFO|afc3cf6c-cbe3-4163-920e-7122f474d371: Claiming fa:16:3e:69:80:52 10.100.2.8
Dec  5 02:15:39 compute-0 podman[454319]: 2025-12-05 02:15:39.490393328 +0000 UTC m=+0.126304148 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, config_id=edpm, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00180|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 ovn-installed in OVS
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:39 compute-0 systemd-udevd[454366]: Network interface NamePolicy= disabled on kernel command line.
Dec  5 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.5243] device (tapafc3cf6c-cb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  5 02:15:39 compute-0 NetworkManager[49092]: <info>  [1764900939.5285] device (tapafc3cf6c-cb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  5 02:15:39 compute-0 systemd-machined[138700]: New machine qemu-16-instance-0000000f.
Dec  5 02:15:39 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.567 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:80:52 10.100.2.8'], port_security=['fa:16:3e:69:80:52 10.100.2.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.8/16', 'neutron:device_id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=afc3cf6c-cbe3-4163-920e-7122f474d371) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.569 287122 INFO neutron.agent.ovn.metadata.agent [-] Port afc3cf6c-cbe3-4163-920e-7122f474d371 in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 bound to our chassis#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.571 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322#033[00m
Dec  5 02:15:39 compute-0 ovn_controller[89286]: 2025-12-05T02:15:39Z|00181|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 up in Southbound
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.589 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[208153bd-706f-41ff-a3ed-817963bfac6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.626 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c4c6c5-9284-42de-aeb4-bd8808952a70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.630 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[de0b1d97-ce7b-4929-97e4-93dabc5f2f34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.667 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[dee28747-b5fc-43ce-9a76-04e6f163f8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.686 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[60332d2a-12cf-4296-a3f0-a2b55a590863]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 17953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 454383, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.710 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b9b3e7-7418-4fac-94d1-c3531220687f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677143, 'tstamp': 677143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454384, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677147, 'tstamp': 677147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 454384, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.712 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.716 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.716 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:15:39 compute-0 nova_compute[349548]: 2025-12-05 02:15:39.716 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.717 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:15:39 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:39.718 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.108 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900940.107982, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.109 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Started (Lifecycle Event)#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.137 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.145 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900940.1082067, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.146 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Paused (Lifecycle Event)#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.171 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.177 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:15:40 compute-0 nova_compute[349548]: 2025-12-05 02:15:40.199 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:15:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec  5 02:15:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  5 02:15:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.081 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.082 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.082 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.106 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.901 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:15:41 compute-0 nova_compute[349548]: 2025-12-05 02:15:41.902 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.074 349552 DEBUG nova.compute.manager [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.075 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.077 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.077 349552 DEBUG oslo_concurrency.lockutils [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.078 349552 DEBUG nova.compute.manager [req-ab619e57-1e22-4187-84d2-e8e385e3307b req-4599da30-ffe1-4d77-8851-548afc1cd34b a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Processing event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.081 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.093 349552 DEBUG nova.virt.driver [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] Emitting event <LifecycleEvent: 1764900942.0932474, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.094 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Resumed (Lifecycle Event)#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.098 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.108 349552 INFO nova.virt.libvirt.driver [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance spawned successfully.#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.109 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8bcc5eb9-6470-42e7-baa4-788b22560c36 does not exist
Dec  5 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 92bfb68b-5822-40c8-a95b-1ddf59298ad4 does not exist
Dec  5 02:15:42 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a6c0d59b-da0b-44a4-a8b4-bcd65c63359a does not exist
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.122 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:15:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:15:42 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.152 349552 DEBUG nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.166 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.167 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.168 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.169 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.170 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.170 349552 DEBUG nova.virt.libvirt.driver [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.176 349552 INFO nova.compute.manager [None req-b658379e-8c3b-480e-9b34-7bfaf8a51f94 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.237 349552 INFO nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 10.44 seconds to spawn the instance on the hypervisor.#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.238 349552 DEBUG nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.310 349552 INFO nova.compute.manager [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 11.55 seconds to build instance.#033[00m
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.326 349552 DEBUG oslo_concurrency.lockutils [None req-4f60c45c-728f-4478-b23f-55ff3acf9ab5 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec  5 02:15:42 compute-0 nova_compute[349548]: 2025-12-05 02:15:42.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.048215272 +0000 UTC m=+0.085777770 container create 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.012028225 +0000 UTC m=+0.049590783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:43 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 02:15:43 compute-0 systemd[1]: Started libpod-conmon-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope.
Dec  5 02:15:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.218 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.222586619 +0000 UTC m=+0.260149117 container init 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.232335573 +0000 UTC m=+0.269898051 container start 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:15:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:43 compute-0 podman[454829]: 2025-12-05 02:15:43.237498938 +0000 UTC m=+0.275061436 container attach 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:15:43 compute-0 eloquent_kepler[454846]: 167 167
Dec  5 02:15:43 compute-0 systemd[1]: libpod-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope: Deactivated successfully.
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.246 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.247 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.248 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:43 compute-0 nova_compute[349548]: 2025-12-05 02:15:43.249 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:15:43 compute-0 podman[454851]: 2025-12-05 02:15:43.299747356 +0000 UTC m=+0.043759840 container died 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:15:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-be338eee8daa2b5083ab83399cdd3265363c7146d2d058a29abffdcc2bf4b371-merged.mount: Deactivated successfully.
Dec  5 02:15:43 compute-0 podman[454851]: 2025-12-05 02:15:43.363527368 +0000 UTC m=+0.107539822 container remove 77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kepler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:15:43 compute-0 systemd[1]: libpod-conmon-77391065b56884f302037b4de58ae54cc66fc722cc28b1788bcdbcd8c8a0b565.scope: Deactivated successfully.
Dec  5 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.638021557 +0000 UTC m=+0.073001811 container create 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 02:15:43 compute-0 systemd[1]: Started libpod-conmon-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope.
Dec  5 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.608551119 +0000 UTC m=+0.043531403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:43 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.752624816 +0000 UTC m=+0.187605080 container init 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.782489914 +0000 UTC m=+0.217470158 container start 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:15:43 compute-0 podman[454872]: 2025-12-05 02:15:43.788776811 +0000 UTC m=+0.223757065 container attach 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.094 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.095 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.096 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.307 349552 DEBUG nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.309 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.310 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.311 349552 DEBUG oslo_concurrency.lockutils [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.312 349552 DEBUG nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.313 349552 WARNING nova.compute.manager [req-002b7ce2-42a9-4407-91fb-75dd5e468395 req-2f2e01b1-04d7-4dae-bd13-9d2b94c9fa92 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received unexpected event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with vm_state active and task_state None.#033[00m
Dec  5 02:15:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Dec  5 02:15:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:15:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476277176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.585 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.721 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.722 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.729 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:15:44 compute-0 nova_compute[349548]: 2025-12-05 02:15:44.729 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:15:44 compute-0 reverent_roentgen[454888]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:15:44 compute-0 reverent_roentgen[454888]: --> relative data size: 1.0
Dec  5 02:15:44 compute-0 reverent_roentgen[454888]: --> All data devices are unavailable
Dec  5 02:15:44 compute-0 systemd[1]: libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Deactivated successfully.
Dec  5 02:15:44 compute-0 systemd[1]: libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Consumed 1.010s CPU time.
Dec  5 02:15:44 compute-0 conmon[454888]: conmon 6d6208ecb80219af8b4a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope/container/memory.events
Dec  5 02:15:44 compute-0 podman[454872]: 2025-12-05 02:15:44.898441017 +0000 UTC m=+1.333421251 container died 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:15:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dbe1ab9ef200b0bec101e824563b5dc625e53be68100417630cf5355c81c283-merged.mount: Deactivated successfully.
Dec  5 02:15:44 compute-0 podman[454872]: 2025-12-05 02:15:44.969479292 +0000 UTC m=+1.404459526 container remove 6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:15:44 compute-0 systemd[1]: libpod-conmon-6d6208ecb80219af8b4a57016c41135b0a4150b7c22789fcc37897144bb0a229.scope: Deactivated successfully.
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.202 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3632MB free_disk=59.92191696166992GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.568 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.568 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.569 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.569 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:15:45 compute-0 nova_compute[349548]: 2025-12-05 02:15:45.721 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:15:45 compute-0 podman[455101]: 2025-12-05 02:15:45.924287259 +0000 UTC m=+0.047365121 container create d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:15:45 compute-0 systemd[1]: Started libpod-conmon-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope.
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:45.906230742 +0000 UTC m=+0.029308624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.022314612 +0000 UTC m=+0.145392474 container init d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.033224607 +0000 UTC m=+0.156302469 container start d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.037743284 +0000 UTC m=+0.160821176 container attach d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Dec  5 02:15:46 compute-0 vigilant_nightingale[455125]: 167 167
Dec  5 02:15:46 compute-0 systemd[1]: libpod-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope: Deactivated successfully.
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.044523635 +0000 UTC m=+0.167601497 container died d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:15:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb64170fec3de1af5375b6865b134354075b8401a43b21ae9ec7ce5a6149a983-merged.mount: Deactivated successfully.
Dec  5 02:15:46 compute-0 podman[455101]: 2025-12-05 02:15:46.093878441 +0000 UTC m=+0.216956303 container remove d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:15:46 compute-0 systemd[1]: libpod-conmon-d75648c9629aed1070fd2cf217a05308786d75be200b7ad9fd771b5ca14e6db0.scope: Deactivated successfully.
Dec  5 02:15:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:15:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2469863366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.270 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.290 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.309 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.338 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:15:46 compute-0 nova_compute[349548]: 2025-12-05 02:15:46.339 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 478 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Dec  5 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.363867154 +0000 UTC m=+0.101886283 container create e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.334630363 +0000 UTC m=+0.072649512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:46 compute-0 systemd[1]: Started libpod-conmon-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope.
Dec  5 02:15:46 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.509612107 +0000 UTC m=+0.247631246 container init e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.534060614 +0000 UTC m=+0.272079713 container start e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:15:46 compute-0 podman[455151]: 2025-12-05 02:15:46.542013697 +0000 UTC m=+0.280032806 container attach e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:15:47 compute-0 nova_compute[349548]: 2025-12-05 02:15:47.334 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]: {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    "0": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "devices": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "/dev/loop3"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            ],
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_name": "ceph_lv0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_size": "21470642176",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "name": "ceph_lv0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "tags": {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_name": "ceph",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.crush_device_class": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.encrypted": "0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_id": "0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.vdo": "0"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            },
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "vg_name": "ceph_vg0"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        }
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    ],
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    "1": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "devices": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "/dev/loop4"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            ],
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_name": "ceph_lv1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_size": "21470642176",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "name": "ceph_lv1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "tags": {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_name": "ceph",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.crush_device_class": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.encrypted": "0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_id": "1",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.vdo": "0"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            },
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "vg_name": "ceph_vg1"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        }
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    ],
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    "2": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "devices": [
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "/dev/loop5"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            ],
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_name": "ceph_lv2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_size": "21470642176",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "name": "ceph_lv2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "tags": {
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.cluster_name": "ceph",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.crush_device_class": "",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.encrypted": "0",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osd_id": "2",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:                "ceph.vdo": "0"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            },
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "type": "block",
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:            "vg_name": "ceph_vg2"
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:        }
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]:    ]
Dec  5 02:15:47 compute-0 upbeat_feynman[455164]: }
Dec  5 02:15:47 compute-0 podman[455151]: 2025-12-05 02:15:47.375583109 +0000 UTC m=+1.113602228 container died e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:15:47 compute-0 systemd[1]: libpod-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope: Deactivated successfully.
Dec  5 02:15:47 compute-0 nova_compute[349548]: 2025-12-05 02:15:47.377 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d133fa26ae8623aee71e65680a41e72415b46523ae7f1e18ae5731476be15bda-merged.mount: Deactivated successfully.
Dec  5 02:15:47 compute-0 podman[455151]: 2025-12-05 02:15:47.4795934 +0000 UTC m=+1.217612519 container remove e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:15:47 compute-0 systemd[1]: libpod-conmon-e5a55f21cbedfe4c8ba5d5f0b3bb02b21c99c41cc6a3be79e374ef2264d6effd.scope: Deactivated successfully.
Dec  5 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.094 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.095 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:48 compute-0 nova_compute[349548]: 2025-12-05 02:15:48.228 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.472448805 +0000 UTC m=+0.096650945 container create 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.425076485 +0000 UTC m=+0.049278685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:48 compute-0 systemd[1]: Started libpod-conmon-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope.
Dec  5 02:15:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.607168549 +0000 UTC m=+0.231370779 container init 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.623602001 +0000 UTC m=+0.247804181 container start 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.630265298 +0000 UTC m=+0.254467458 container attach 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:15:48 compute-0 boring_spence[455337]: 167 167
Dec  5 02:15:48 compute-0 systemd[1]: libpod-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope: Deactivated successfully.
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.638560731 +0000 UTC m=+0.262762901 container died 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:15:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc266e145545c29e2941bdcd46e3b4bf9502f5d9d98b8c0cb8ac129a95774fcd-merged.mount: Deactivated successfully.
Dec  5 02:15:48 compute-0 podman[455323]: 2025-12-05 02:15:48.710230154 +0000 UTC m=+0.334432304 container remove 5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:15:48 compute-0 systemd[1]: libpod-conmon-5137c55ee084094c5d554f5487ccc611ca512c5139c327bd30aa9acce0229415.scope: Deactivated successfully.
Dec  5 02:15:48 compute-0 podman[455398]: 2025-12-05 02:15:48.911535098 +0000 UTC m=+0.055325915 container create bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:15:48 compute-0 podman[455354]: 2025-12-05 02:15:48.91377613 +0000 UTC m=+0.134057156 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:15:48 compute-0 podman[455357]: 2025-12-05 02:15:48.936632602 +0000 UTC m=+0.146103194 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec  5 02:15:48 compute-0 podman[455356]: 2025-12-05 02:15:48.93832068 +0000 UTC m=+0.146657850 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Dec  5 02:15:48 compute-0 podman[455362]: 2025-12-05 02:15:48.958850477 +0000 UTC m=+0.151588319 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:15:48 compute-0 systemd[1]: Started libpod-conmon-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope.
Dec  5 02:15:48 compute-0 podman[455398]: 2025-12-05 02:15:48.890855047 +0000 UTC m=+0.034645884 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:15:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.014716696 +0000 UTC m=+0.158507563 container init bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.028272616 +0000 UTC m=+0.172063433 container start bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:15:49 compute-0 podman[455398]: 2025-12-05 02:15:49.032460814 +0000 UTC m=+0.176251671 container attach bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]: {
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_id": 0,
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "type": "bluestore"
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    },
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_id": 1,
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "type": "bluestore"
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    },
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_id": 2,
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:        "type": "bluestore"
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]:    }
Dec  5 02:15:50 compute-0 gallant_bhaskara[455460]: }
Dec  5 02:15:50 compute-0 systemd[1]: libpod-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Deactivated successfully.
Dec  5 02:15:50 compute-0 podman[455398]: 2025-12-05 02:15:50.135648737 +0000 UTC m=+1.279439584 container died bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:15:50 compute-0 systemd[1]: libpod-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Consumed 1.094s CPU time.
Dec  5 02:15:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b6eee875e73fad3e187986392a410da2a22fd7cc23f40e2d8bee885374ad09f-merged.mount: Deactivated successfully.
Dec  5 02:15:50 compute-0 podman[455398]: 2025-12-05 02:15:50.238631689 +0000 UTC m=+1.382422526 container remove bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bhaskara, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:15:50 compute-0 systemd[1]: libpod-conmon-bc4ce12973dd392959e42454e7999c7819a7067dd7be360a94a19e78a51aff5b.scope: Deactivated successfully.
Dec  5 02:15:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:15:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:15:50 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cdc8dde9-c94f-47ed-978b-d7bc417fcb43 does not exist
Dec  5 02:15:50 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b59a0732-450c-404a-bc66-bb44c38e27dd does not exist
Dec  5 02:15:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec  5 02:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:51 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:15:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 73 op/s
Dec  5 02:15:52 compute-0 nova_compute[349548]: 2025-12-05 02:15:52.378 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:15:53 compute-0 nova_compute[349548]: 2025-12-05 02:15:53.232 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec  5 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.213 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:15:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:15:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:15:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec  5 02:15:57 compute-0 nova_compute[349548]: 2025-12-05 02:15:57.381 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.467501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957467584, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1991, "num_deletes": 251, "total_data_size": 3320324, "memory_usage": 3381232, "flush_reason": "Manual Compaction"}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957495250, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 3255352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39692, "largest_seqno": 41682, "table_properties": {"data_size": 3246231, "index_size": 5743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18351, "raw_average_key_size": 20, "raw_value_size": 3228145, "raw_average_value_size": 3539, "num_data_blocks": 255, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900739, "oldest_key_time": 1764900739, "file_creation_time": 1764900957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 27799 microseconds, and 14197 cpu microseconds.
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.495311) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 3255352 bytes OK
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.495340) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499577) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499600) EVENT_LOG_v1 {"time_micros": 1764900957499593, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.499624) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 3311944, prev total WAL file size 3311944, number of live WAL files 2.
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.501249) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(3179KB)], [95(6091KB)]
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957501299, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 9493528, "oldest_snapshot_seqno": -1}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 5768 keys, 7791521 bytes, temperature: kUnknown
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957544773, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7791521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7754952, "index_size": 21035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 149336, "raw_average_key_size": 25, "raw_value_size": 7652574, "raw_average_value_size": 1326, "num_data_blocks": 837, "num_entries": 5768, "num_filter_entries": 5768, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764900957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.545010) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7791521 bytes
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.546848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 218.1 rd, 179.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 5.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 6282, records dropped: 514 output_compression: NoCompression
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.546863) EVENT_LOG_v1 {"time_micros": 1764900957546856, "job": 56, "event": "compaction_finished", "compaction_time_micros": 43532, "compaction_time_cpu_micros": 17880, "output_level": 6, "num_output_files": 1, "total_output_size": 7791521, "num_input_records": 6282, "num_output_records": 5768, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957547477, "job": 56, "event": "table_file_deletion", "file_number": 97}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764900957548408, "job": 56, "event": "table_file_deletion", "file_number": 95}
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.501104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:57 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:15:57.548690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:15:58 compute-0 nova_compute[349548]: 2025-12-05 02:15:58.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:15:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:15:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 49 op/s
Dec  5 02:15:59 compute-0 podman[158197]: time="2025-12-05T02:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:15:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
Dec  5 02:16:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 0 op/s
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: ERROR   02:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:16:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:16:02 compute-0 nova_compute[349548]: 2025-12-05 02:16:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:16:02 compute-0 nova_compute[349548]: 2025-12-05 02:16:02.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:03 compute-0 nova_compute[349548]: 2025-12-05 02:16:03.239 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:03 compute-0 podman[455560]: 2025-12-05 02:16:03.694423965 +0000 UTC m=+0.098977611 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  5 02:16:03 compute-0 podman[455561]: 2025-12-05 02:16:03.710513097 +0000 UTC m=+0.112111220 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:16:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:16:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Dec  5 02:16:07 compute-0 nova_compute[349548]: 2025-12-05 02:16:07.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:08 compute-0 nova_compute[349548]: 2025-12-05 02:16:08.242 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 44 op/s
Dec  5 02:16:08 compute-0 podman[455600]: 2025-12-05 02:16:08.684848663 +0000 UTC m=+0.102595672 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:16:08 compute-0 podman[455601]: 2025-12-05 02:16:08.715034891 +0000 UTC m=+0.120426423 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Dec  5 02:16:09 compute-0 ovn_controller[89286]: 2025-12-05T02:16:09Z|00182|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Dec  5 02:16:09 compute-0 podman[455637]: 2025-12-05 02:16:09.671553336 +0000 UTC m=+0.092152850 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:16:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 0 B/s wr, 66 op/s
Dec  5 02:16:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec  5 02:16:12 compute-0 nova_compute[349548]: 2025-12-05 02:16:12.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:13 compute-0 nova_compute[349548]: 2025-12-05 02:16:13.246 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 203 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 72 op/s
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:16:16
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images']
Dec  5 02:16:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:16:17 compute-0 nova_compute[349548]: 2025-12-05 02:16:17.388 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:16:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:18 compute-0 nova_compute[349548]: 2025-12-05 02:16:18.249 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 213 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 1.1 MiB/s wr, 80 op/s
Dec  5 02:16:18 compute-0 ovn_controller[89286]: 2025-12-05T02:16:18Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:80:52 10.100.2.8
Dec  5 02:16:18 compute-0 ovn_controller[89286]: 2025-12-05T02:16:18Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:80:52 10.100.2.8
Dec  5 02:16:19 compute-0 podman[455658]: 2025-12-05 02:16:19.687653124 +0000 UTC m=+0.090628117 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:16:19 compute-0 podman[455657]: 2025-12-05 02:16:19.697379577 +0000 UTC m=+0.099762873 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  5 02:16:19 compute-0 podman[455659]: 2025-12-05 02:16:19.731587158 +0000 UTC m=+0.133945723 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:16:19 compute-0 podman[455660]: 2025-12-05 02:16:19.735255991 +0000 UTC m=+0.129319923 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec  5 02:16:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 219 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 1.3 MiB/s wr, 57 op/s
Dec  5 02:16:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 295 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec  5 02:16:22 compute-0 nova_compute[349548]: 2025-12-05 02:16:22.390 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:23 compute-0 nova_compute[349548]: 2025-12-05 02:16:23.252 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec  5 02:16:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015141583439148272 of space, bias 1.0, pg target 0.4542475031744482 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:16:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:16:27 compute-0 nova_compute[349548]: 2025-12-05 02:16:27.392 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:28 compute-0 nova_compute[349548]: 2025-12-05 02:16:28.255 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Dec  5 02:16:29 compute-0 podman[158197]: time="2025-12-05T02:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:16:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8650 "" "Go-http-client/1.1"
Dec  5 02:16:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 1.0 MiB/s wr, 44 op/s
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: ERROR   02:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:16:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:16:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 821 KiB/s wr, 29 op/s
Dec  5 02:16:32 compute-0 nova_compute[349548]: 2025-12-05 02:16:32.395 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:33 compute-0 nova_compute[349548]: 2025-12-05 02:16:33.258 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.089 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.122 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid 292fd084-0808-4a80-adc1-6ab1f28e188a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.123 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Triggering sync for uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.125 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.126 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.127 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.170 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:16:34 compute-0 nova_compute[349548]: 2025-12-05 02:16:34.172 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:16:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  5 02:16:34 compute-0 podman[455741]: 2025-12-05 02:16:34.714673218 +0000 UTC m=+0.123639334 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 02:16:34 compute-0 podman[455742]: 2025-12-05 02:16:34.755768902 +0000 UTC m=+0.153106141 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:16:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  5 02:16:37 compute-0 nova_compute[349548]: 2025-12-05 02:16:37.398 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:38 compute-0 nova_compute[349548]: 2025-12-05 02:16:38.261 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.341 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  5 02:16:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:38.343 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}03a5c5085f72a10a14834caf2c8f725d7bea9761ee1da0af3d318eb89d91a8ae" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  5 02:16:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.378 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1830 Content-Type: application/json Date: Fri, 05 Dec 2025 02:16:38 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-cd29bee7-01d3-4292-8507-29ac68d1958b x-openstack-request-id: req-cd29bee7-01d3-4292-8507-29ac68d1958b _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.379 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7", "name": "te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo", "status": "ACTIVE", "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "user_id": "99591ed8361e41579fee1d14f16bf0f7", "metadata": {"metering.server_group": "92ca195d-98d1-443c-9947-dcb7ca7b926a"}, "hostId": "1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18", "image": {"id": "773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e"}]}, "flavor": {"id": "bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49"}]}, "created": "2025-12-05T02:15:29Z", "updated": "2025-12-05T02:15:42Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.8", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:69:80:52"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-05T02:15:42.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.379 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 used request id req-cd29bee7-01d3-4292-8507-29ac68d1958b request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.381 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.383 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:16:39.382996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.387 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:16:39.386479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.410 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.411 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.433 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.434 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.438 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.439 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>]
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:16:39.436788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-05T02:16:39.439621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.442 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:16:39.442347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.506 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 29961216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.507 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.595 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.596 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.597 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:16:39.598458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3090417276 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.599 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 214244219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.600 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.601 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.602 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.604 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:16:39.603585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.605 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.606 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:16:39.609088) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.609 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.610 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.611 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.611 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.614 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:16:39.614487) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 72839168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.615 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.616 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72785920 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.616 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.618 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.620 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:16:39.619844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.664 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 podman[455783]: 2025-12-05 02:16:39.707841114 +0000 UTC m=+0.121766311 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.708 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.710 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.711 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 10935968399 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:16:39.711193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.713 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.714 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10282138591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.714 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.716 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 290 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.717 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:16:39.716228) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.718 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 271 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.718 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.720 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:16:39.720624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.725 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.729 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 / tapafc3cf6c-cb inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.729 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 podman[455784]: 2025-12-05 02:16:39.729859352 +0000 UTC m=+0.125272029 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:16:39.730412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.731 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.732 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:16:39.733296) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.734 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.735 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.736 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:16:39.735694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.738 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:16:39.738342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:16:39.740406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.741 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.741 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-05T02:16:39.743153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.743 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo>]
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:16:39.744502) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 43.12109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.484375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:16:39.746229) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.747 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.747 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.748 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:16:39.748958) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.749 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.751 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.752 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:16:39.751501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.752 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.753 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.754 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 303170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 54360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.755 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:16:39.754266) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.756 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:16:39.756106) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.757 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:16:39.757865) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.758 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.759 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.759 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.760 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.761 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:16:39.762 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:16:39 compute-0 podman[455818]: 2025-12-05 02:16:39.815382214 +0000 UTC m=+0.075078430 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9)
Dec  5 02:16:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s wr, 0 op/s
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.104 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.106 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.107 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.825 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.826 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.827 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:16:41 compute-0 nova_compute[349548]: 2025-12-05 02:16:41.828 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:16:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 5.1 KiB/s wr, 0 op/s
Dec  5 02:16:42 compute-0 nova_compute[349548]: 2025-12-05 02:16:42.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.088 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.117 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.119 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.119 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.120 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:16:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:43 compute-0 nova_compute[349548]: 2025-12-05 02:16:43.265 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:44 compute-0 nova_compute[349548]: 2025-12-05 02:16:44.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:16:45 compute-0 nova_compute[349548]: 2025-12-05 02:16:45.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:16:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:16:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:16:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/34920223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.110 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.111 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.112 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.113 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:16:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:16:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:16:46 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2984854046' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.697 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.818 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.820 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.828 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:16:46 compute-0 nova_compute[349548]: 2025-12-05 02:16:46.829 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.486 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.487 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3588MB free_disk=59.897396087646484GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.488 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.489 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.592 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:16:47 compute-0 nova_compute[349548]: 2025-12-05 02:16:47.651 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:16:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398920761' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.204 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.219 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.242 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.246 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.248 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:16:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:48 compute-0 nova_compute[349548]: 2025-12-05 02:16:48.268 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:16:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.1 KiB/s wr, 0 op/s
Dec  5 02:16:50 compute-0 podman[455886]: 2025-12-05 02:16:50.717589709 +0000 UTC m=+0.112900822 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  5 02:16:50 compute-0 podman[455891]: 2025-12-05 02:16:50.723223177 +0000 UTC m=+0.126254317 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:16:50 compute-0 podman[455901]: 2025-12-05 02:16:50.754932608 +0000 UTC m=+0.137208615 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Dec  5 02:16:50 compute-0 podman[455893]: 2025-12-05 02:16:50.760468583 +0000 UTC m=+0.145518868 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 02:16:51 compute-0 nova_compute[349548]: 2025-12-05 02:16:51.250 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:51 compute-0 nova_compute[349548]: 2025-12-05 02:16:51.253 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:16:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 1.4 KiB/s wr, 0 op/s
Dec  5 02:16:52 compute-0 nova_compute[349548]: 2025-12-05 02:16:52.403 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.617229932 +0000 UTC m=+0.071627433 container create 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.582297161 +0000 UTC m=+0.036694712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:16:52 compute-0 systemd[1]: Started libpod-conmon-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope.
Dec  5 02:16:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.749051444 +0000 UTC m=+0.203448985 container init 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.760087274 +0000 UTC m=+0.214484765 container start 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.764292263 +0000 UTC m=+0.218689764 container attach 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  5 02:16:52 compute-0 beautiful_shamir[456251]: 167 167
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.772585925 +0000 UTC m=+0.226983406 container died 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 02:16:52 compute-0 systemd[1]: libpod-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope: Deactivated successfully.
Dec  5 02:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-457ba28c763674dd0952f6ab925630f61463c735e993551ac2c3f9f3056d8ab3-merged.mount: Deactivated successfully.
Dec  5 02:16:52 compute-0 podman[456235]: 2025-12-05 02:16:52.82687961 +0000 UTC m=+0.281277101 container remove 13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 02:16:52 compute-0 systemd[1]: libpod-conmon-13fe6aa58e5bd5f03ac2c9593df258e8f5636d65486248c5e6348dc20a077ad1.scope: Deactivated successfully.
Dec  5 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.052788615 +0000 UTC m=+0.079643728 container create 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.01950324 +0000 UTC m=+0.046358393 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:16:53 compute-0 systemd[1]: Started libpod-conmon-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope.
Dec  5 02:16:53 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.235813806 +0000 UTC m=+0.262668969 container init 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.250850858 +0000 UTC m=+0.277705931 container start 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:16:53 compute-0 podman[456274]: 2025-12-05 02:16:53.257201746 +0000 UTC m=+0.284056869 container attach 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:16:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:53 compute-0 nova_compute[349548]: 2025-12-05 02:16:53.272 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]: [
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:    {
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "available": false,
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "ceph_device": false,
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "lsm_data": {},
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "lvs": [],
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "path": "/dev/sr0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "rejected_reasons": [
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "Insufficient space (<5GB)",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "Has a FileSystem"
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        ],
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        "sys_api": {
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "actuators": null,
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "device_nodes": "sr0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "devname": "sr0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "human_readable_size": "482.00 KB",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "id_bus": "ata",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "model": "QEMU DVD-ROM",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "nr_requests": "2",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "parent": "/dev/sr0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "partitions": {},
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "path": "/dev/sr0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "removable": "1",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "rev": "2.5+",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "ro": "0",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "rotational": "1",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "sas_address": "",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "sas_device_handle": "",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "scheduler_mode": "mq-deadline",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "sectors": 0,
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "sectorsize": "2048",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "size": 493568.0,
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "support_discard": "2048",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "type": "disk",
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:            "vendor": "QEMU"
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:        }
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]:    }
Dec  5 02:16:55 compute-0 wonderful_proskuriakova[456290]: ]
Dec  5 02:16:55 compute-0 systemd[1]: libpod-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Deactivated successfully.
Dec  5 02:16:55 compute-0 podman[456274]: 2025-12-05 02:16:55.615956133 +0000 UTC m=+2.642811226 container died 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:16:55 compute-0 systemd[1]: libpod-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Consumed 2.405s CPU time.
Dec  5 02:16:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6b2d40ba2216777cbe4075b83b9e16e8c2227e4bfd3bd43dcf6d311340fe95a-merged.mount: Deactivated successfully.
Dec  5 02:16:55 compute-0 podman[456274]: 2025-12-05 02:16:55.696433563 +0000 UTC m=+2.723288646 container remove 0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_proskuriakova, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:16:55 compute-0 systemd[1]: libpod-conmon-0b5116f720f6528e8fbbf12dbdd388f98515519031e303497f26f4c86894abd2.scope: Deactivated successfully.
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4165ef21-ac94-4975-9a10-db81e571b918 does not exist
Dec  5 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 01f6321f-a38c-4402-9dc0-53028a9b712f does not exist
Dec  5 02:16:55 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f60816ee-95cb-4c73-b1a8-23870fcab235 does not exist
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:16:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:16:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.214 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:16:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:16:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:16:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:16:56 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:16:56 compute-0 podman[458588]: 2025-12-05 02:16:56.903593298 +0000 UTC m=+0.072201569 container create 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:16:56 compute-0 systemd[1]: Started libpod-conmon-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope.
Dec  5 02:16:56 compute-0 podman[458588]: 2025-12-05 02:16:56.873848442 +0000 UTC m=+0.042456723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:16:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.031621713 +0000 UTC m=+0.200229994 container init 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.051735548 +0000 UTC m=+0.220343799 container start 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.057637524 +0000 UTC m=+0.226245775 container attach 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:16:57 compute-0 hungry_turing[458604]: 167 167
Dec  5 02:16:57 compute-0 systemd[1]: libpod-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope: Deactivated successfully.
Dec  5 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.065012441 +0000 UTC m=+0.233620712 container died 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 02:16:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8bd229b227c2f5dbb3fbd49eee17dc109d7897911320d68c1c8697f6a0d367-merged.mount: Deactivated successfully.
Dec  5 02:16:57 compute-0 podman[458588]: 2025-12-05 02:16:57.132034513 +0000 UTC m=+0.300642744 container remove 8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 02:16:57 compute-0 systemd[1]: libpod-conmon-8d6d60a63ccd24b8ad9d435607045d518a51f0e35e0729721910c959ed244618.scope: Deactivated successfully.
Dec  5 02:16:57 compute-0 nova_compute[349548]: 2025-12-05 02:16:57.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.425939178 +0000 UTC m=+0.111647977 container create 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.372644911 +0000 UTC m=+0.058353730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:16:57 compute-0 systemd[1]: Started libpod-conmon-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope.
Dec  5 02:16:57 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.626712496 +0000 UTC m=+0.312421315 container init 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.644830185 +0000 UTC m=+0.330538984 container start 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 02:16:57 compute-0 podman[458628]: 2025-12-05 02:16:57.651555884 +0000 UTC m=+0.337264683 container attach 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:16:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:16:58 compute-0 nova_compute[349548]: 2025-12-05 02:16:58.275 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:16:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  5 02:16:58 compute-0 sharp_montalcini[458644]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:16:58 compute-0 sharp_montalcini[458644]: --> relative data size: 1.0
Dec  5 02:16:58 compute-0 sharp_montalcini[458644]: --> All data devices are unavailable
Dec  5 02:16:59 compute-0 systemd[1]: libpod-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Deactivated successfully.
Dec  5 02:16:59 compute-0 podman[458628]: 2025-12-05 02:16:59.016432608 +0000 UTC m=+1.702141397 container died 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:16:59 compute-0 systemd[1]: libpod-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Consumed 1.301s CPU time.
Dec  5 02:16:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe12e73fdfe1a17fc543936ca492819258d205f2503f8b6b19ecc4d0617c387-merged.mount: Deactivated successfully.
Dec  5 02:16:59 compute-0 podman[458628]: 2025-12-05 02:16:59.114506582 +0000 UTC m=+1.800215381 container remove 9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:16:59 compute-0 systemd[1]: libpod-conmon-9aa73d19a6f5a16693c84660199a91f65f437d8e000ab8f471911759c51c1be9.scope: Deactivated successfully.
Dec  5 02:16:59 compute-0 podman[158197]: time="2025-12-05T02:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:16:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8660 "" "Go-http-client/1.1"
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.229767765 +0000 UTC m=+0.086632954 container create 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.188004162 +0000 UTC m=+0.044869431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:17:00 compute-0 systemd[1]: Started libpod-conmon-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope.
Dec  5 02:17:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.374661875 +0000 UTC m=+0.231527144 container init 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.387527806 +0000 UTC m=+0.244393025 container start 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:17:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.395228992 +0000 UTC m=+0.252094261 container attach 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:17:00 compute-0 condescending_hugle[458837]: 167 167
Dec  5 02:17:00 compute-0 systemd[1]: libpod-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope: Deactivated successfully.
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.402733943 +0000 UTC m=+0.259599142 container died 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:17:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-666ba6de41b1383330473895541dcfd6248d0ef70fc972fd84f983cb73ad2be4-merged.mount: Deactivated successfully.
Dec  5 02:17:00 compute-0 podman[458821]: 2025-12-05 02:17:00.469968351 +0000 UTC m=+0.326833540 container remove 46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_hugle, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 02:17:00 compute-0 systemd[1]: libpod-conmon-46a9acda12e9ddd6d943b988a6f8fb22e336e53db08b2c38f5f276eb600475b0.scope: Deactivated successfully.
Dec  5 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.754944875 +0000 UTC m=+0.091392908 container create cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.719245113 +0000 UTC m=+0.055693186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:17:00 compute-0 systemd[1]: Started libpod-conmon-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope.
Dec  5 02:17:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.917695096 +0000 UTC m=+0.254143209 container init cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.959982184 +0000 UTC m=+0.296430247 container start cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  5 02:17:00 compute-0 podman[458859]: 2025-12-05 02:17:00.966565029 +0000 UTC m=+0.303013142 container attach cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: ERROR   02:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:17:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:17:01 compute-0 quirky_black[458875]: {
Dec  5 02:17:01 compute-0 quirky_black[458875]:    "0": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:        {
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "devices": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "/dev/loop3"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            ],
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_name": "ceph_lv0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_size": "21470642176",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "name": "ceph_lv0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "tags": {
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_name": "ceph",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.crush_device_class": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.encrypted": "0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_id": "0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.vdo": "0"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            },
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "vg_name": "ceph_vg0"
Dec  5 02:17:01 compute-0 quirky_black[458875]:        }
Dec  5 02:17:01 compute-0 quirky_black[458875]:    ],
Dec  5 02:17:01 compute-0 quirky_black[458875]:    "1": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:        {
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "devices": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "/dev/loop4"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            ],
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_name": "ceph_lv1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_size": "21470642176",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "name": "ceph_lv1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "tags": {
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_name": "ceph",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.crush_device_class": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.encrypted": "0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_id": "1",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.vdo": "0"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            },
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "vg_name": "ceph_vg1"
Dec  5 02:17:01 compute-0 quirky_black[458875]:        }
Dec  5 02:17:01 compute-0 quirky_black[458875]:    ],
Dec  5 02:17:01 compute-0 quirky_black[458875]:    "2": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:        {
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "devices": [
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "/dev/loop5"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            ],
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_name": "ceph_lv2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_size": "21470642176",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "name": "ceph_lv2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "tags": {
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.cluster_name": "ceph",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.crush_device_class": "",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.encrypted": "0",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osd_id": "2",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:                "ceph.vdo": "0"
Dec  5 02:17:01 compute-0 quirky_black[458875]:            },
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "type": "block",
Dec  5 02:17:01 compute-0 quirky_black[458875]:            "vg_name": "ceph_vg2"
Dec  5 02:17:01 compute-0 quirky_black[458875]:        }
Dec  5 02:17:01 compute-0 quirky_black[458875]:    ]
Dec  5 02:17:01 compute-0 quirky_black[458875]: }
Dec  5 02:17:01 compute-0 systemd[1]: libpod-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope: Deactivated successfully.
Dec  5 02:17:01 compute-0 podman[458859]: 2025-12-05 02:17:01.847961993 +0000 UTC m=+1.184410026 container died cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 02:17:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d60e7905d7cbb234bbdd64b99b10fa7128c0385f72c55a967d59c29f35a512-merged.mount: Deactivated successfully.
Dec  5 02:17:01 compute-0 podman[458859]: 2025-12-05 02:17:01.929182304 +0000 UTC m=+1.265630347 container remove cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:17:01 compute-0 systemd[1]: libpod-conmon-cdfd21eca7a1c8d3f829074ef032461d8734eb71bb83e9e2a78cff182cd59c5a.scope: Deactivated successfully.
Dec  5 02:17:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  5 02:17:02 compute-0 nova_compute[349548]: 2025-12-05 02:17:02.409 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:02 compute-0 podman[459031]: 2025-12-05 02:17:02.967622439 +0000 UTC m=+0.086610273 container create 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:02.931425573 +0000 UTC m=+0.050413477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:17:03 compute-0 systemd[1]: Started libpod-conmon-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope.
Dec  5 02:17:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.104196135 +0000 UTC m=+0.223184009 container init 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.118798295 +0000 UTC m=+0.237786159 container start 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.125694619 +0000 UTC m=+0.244682463 container attach 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:17:03 compute-0 stoic_buck[459047]: 167 167
Dec  5 02:17:03 compute-0 systemd[1]: libpod-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope: Deactivated successfully.
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.132370646 +0000 UTC m=+0.251358470 container died 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 02:17:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-45fe5e4ea339b04d6b2479de3025b567e8cee4c88bd908135385c03b6264eb38-merged.mount: Deactivated successfully.
Dec  5 02:17:03 compute-0 podman[459031]: 2025-12-05 02:17:03.209477452 +0000 UTC m=+0.328465286 container remove 6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:17:03 compute-0 systemd[1]: libpod-conmon-6d71595d31ca17f12edd3a5164107ec50c4ad16ce680820f4743fadf164794aa.scope: Deactivated successfully.
Dec  5 02:17:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:03 compute-0 nova_compute[349548]: 2025-12-05 02:17:03.278 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.470120122 +0000 UTC m=+0.072558348 container create ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.434808691 +0000 UTC m=+0.037246947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:17:03 compute-0 systemd[1]: Started libpod-conmon-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope.
Dec  5 02:17:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.622394039 +0000 UTC m=+0.224832265 container init ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Dec  5 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.637280407 +0000 UTC m=+0.239718653 container start ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:17:03 compute-0 podman[459070]: 2025-12-05 02:17:03.644370176 +0000 UTC m=+0.246808422 container attach ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:17:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:17:04 compute-0 infallible_ride[459085]: {
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_id": 0,
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "type": "bluestore"
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    },
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_id": 1,
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "type": "bluestore"
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    },
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_id": 2,
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:17:04 compute-0 infallible_ride[459085]:        "type": "bluestore"
Dec  5 02:17:04 compute-0 infallible_ride[459085]:    }
Dec  5 02:17:04 compute-0 infallible_ride[459085]: }
Dec  5 02:17:04 compute-0 systemd[1]: libpod-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Deactivated successfully.
Dec  5 02:17:04 compute-0 systemd[1]: libpod-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Consumed 1.158s CPU time.
Dec  5 02:17:04 compute-0 podman[459121]: 2025-12-05 02:17:04.895667189 +0000 UTC m=+0.070327676 container died ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:17:04 compute-0 podman[459122]: 2025-12-05 02:17:04.920736253 +0000 UTC m=+0.092051206 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:17:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-df6b871b34d2003a152489f5d146bd2c6985f8f146941e8b1974fa098d77c9ca-merged.mount: Deactivated successfully.
Dec  5 02:17:04 compute-0 podman[459120]: 2025-12-05 02:17:04.950025276 +0000 UTC m=+0.118542660 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 02:17:04 compute-0 podman[459121]: 2025-12-05 02:17:04.975910653 +0000 UTC m=+0.150571090 container remove ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:17:04 compute-0 systemd[1]: libpod-conmon-ffe6b1ae6d0f0ee9e54585a96e9eec76ae26b27adbb026a01e30913b47fcb6bf.scope: Deactivated successfully.
Dec  5 02:17:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:17:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:17:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:17:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:17:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d6964181-190d-40d3-aa34-b4790c58d58f does not exist
Dec  5 02:17:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ca745ec6-c8c6-4cf2-8985-5e583ab89944 does not exist
Dec  5 02:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:17:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:17:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:17:07 compute-0 nova_compute[349548]: 2025-12-05 02:17:07.414 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:08 compute-0 nova_compute[349548]: 2025-12-05 02:17:08.281 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:17:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 1 op/s
Dec  5 02:17:10 compute-0 podman[459224]: 2025-12-05 02:17:10.676391625 +0000 UTC m=+0.092412786 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.schema-version=1.0)
Dec  5 02:17:10 compute-0 podman[459225]: 2025-12-05 02:17:10.698107995 +0000 UTC m=+0.111812931 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9)
Dec  5 02:17:10 compute-0 podman[459226]: 2025-12-05 02:17:10.706326606 +0000 UTC m=+0.110595757 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:17:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  5 02:17:12 compute-0 nova_compute[349548]: 2025-12-05 02:17:12.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:13 compute-0 nova_compute[349548]: 2025-12-05 02:17:13.285 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 170 B/s wr, 4 op/s
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:17:16
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.log']
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:17:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 7.8 KiB/s wr, 4 op/s
Dec  5 02:17:17 compute-0 nova_compute[349548]: 2025-12-05 02:17:17.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:17:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:18 compute-0 nova_compute[349548]: 2025-12-05 02:17:18.288 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  5 02:17:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  5 02:17:21 compute-0 podman[459283]: 2025-12-05 02:17:21.705519075 +0000 UTC m=+0.109100065 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:17:21 compute-0 podman[459282]: 2025-12-05 02:17:21.729296323 +0000 UTC m=+0.136257818 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 02:17:21 compute-0 podman[459285]: 2025-12-05 02:17:21.73276719 +0000 UTC m=+0.116910634 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:17:21 compute-0 podman[459284]: 2025-12-05 02:17:21.766739665 +0000 UTC m=+0.157660619 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:17:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 11 KiB/s wr, 3 op/s
Dec  5 02:17:22 compute-0 nova_compute[349548]: 2025-12-05 02:17:22.429 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:23 compute-0 nova_compute[349548]: 2025-12-05 02:17:23.292 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 0 op/s
Dec  5 02:17:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 11 KiB/s wr, 0 op/s
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015166383815198282 of space, bias 1.0, pg target 0.45499151445594843 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:17:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:17:27 compute-0 nova_compute[349548]: 2025-12-05 02:17:27.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.288689) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048288780, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 954, "num_deletes": 256, "total_data_size": 1374180, "memory_usage": 1400784, "flush_reason": "Manual Compaction"}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Dec  5 02:17:28 compute-0 nova_compute[349548]: 2025-12-05 02:17:28.295 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048305446, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 1350701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41683, "largest_seqno": 42636, "table_properties": {"data_size": 1345919, "index_size": 2370, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10005, "raw_average_key_size": 19, "raw_value_size": 1336462, "raw_average_value_size": 2555, "num_data_blocks": 106, "num_entries": 523, "num_filter_entries": 523, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764900958, "oldest_key_time": 1764900958, "file_creation_time": 1764901048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 16799 microseconds, and 9564 cpu microseconds.
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.305497) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 1350701 bytes OK
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.305521) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308372) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308394) EVENT_LOG_v1 {"time_micros": 1764901048308387, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.308418) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 1369598, prev total WAL file size 1369598, number of live WAL files 2.
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.309680) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(1319KB)], [98(7608KB)]
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048309765, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 9142222, "oldest_snapshot_seqno": -1}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 5767 keys, 9039648 bytes, temperature: kUnknown
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048375266, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 9039648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9001125, "index_size": 22989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14469, "raw_key_size": 150197, "raw_average_key_size": 26, "raw_value_size": 8896815, "raw_average_value_size": 1542, "num_data_blocks": 919, "num_entries": 5767, "num_filter_entries": 5767, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901048, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.375660) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 9039648 bytes
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.378222) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.2 rd, 137.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 6291, records dropped: 524 output_compression: NoCompression
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.378254) EVENT_LOG_v1 {"time_micros": 1764901048378239, "job": 58, "event": "compaction_finished", "compaction_time_micros": 65662, "compaction_time_cpu_micros": 37590, "output_level": 6, "num_output_files": 1, "total_output_size": 9039648, "num_input_records": 6291, "num_output_records": 5767, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048378850, "job": 58, "event": "table_file_deletion", "file_number": 100}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901048381742, "job": 58, "event": "table_file_deletion", "file_number": 98}
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.309461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:17:28.382037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:17:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 3.1 KiB/s wr, 0 op/s
Dec  5 02:17:29 compute-0 podman[158197]: time="2025-12-05T02:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:17:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8648 "" "Go-http-client/1.1"
Dec  5 02:17:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.3 KiB/s wr, 0 op/s
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: ERROR   02:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:17:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:17:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 9.7 KiB/s wr, 0 op/s
Dec  5 02:17:32 compute-0 nova_compute[349548]: 2025-12-05 02:17:32.433 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:33 compute-0 nova_compute[349548]: 2025-12-05 02:17:33.298 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:17:35 compute-0 podman[459365]: 2025-12-05 02:17:35.683538387 +0000 UTC m=+0.097355716 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 02:17:35 compute-0 podman[459366]: 2025-12-05 02:17:35.689815883 +0000 UTC m=+0.097946262 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:17:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:17:37 compute-0 nova_compute[349548]: 2025-12-05 02:17:37.437 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:38 compute-0 nova_compute[349548]: 2025-12-05 02:17:38.300 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:17:39 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  5 02:17:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  5 02:17:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  5 02:17:41 compute-0 podman[459409]: 2025-12-05 02:17:41.026863687 +0000 UTC m=+0.103397835 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:17:41 compute-0 podman[459408]: 2025-12-05 02:17:41.036357114 +0000 UTC m=+0.113372545 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:17:41 compute-0 podman[459410]: 2025-12-05 02:17:41.060517792 +0000 UTC m=+0.124081476 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm)
Dec  5 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.070 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.070 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.838 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.839 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:17:41 compute-0 nova_compute[349548]: 2025-12-05 02:17:41.839 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:17:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:17:42 compute-0 nova_compute[349548]: 2025-12-05 02:17:42.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.239 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.265 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.266 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:43 compute-0 nova_compute[349548]: 2025-12-05 02:17:43.303 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:45 compute-0 nova_compute[349548]: 2025-12-05 02:17:45.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:17:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:17:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:17:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1081539781' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:17:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s wr, 0 op/s
Dec  5 02:17:47 compute-0 nova_compute[349548]: 2025-12-05 02:17:47.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.095 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.096 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.097 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.099 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.100 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.305 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:17:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:17:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/283683035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.637 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.795 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.801 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.809 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:17:48 compute-0 nova_compute[349548]: 2025-12-05 02:17:48.810 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.371 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.373 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3566MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.373 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.374 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.477 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.479 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.480 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.481 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:17:49 compute-0 nova_compute[349548]: 2025-12-05 02:17:49.558 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:17:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:17:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473721791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.065 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.077 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.241 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.244 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:17:50 compute-0 nova_compute[349548]: 2025-12-05 02:17:50.244 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:17:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.246 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:51 compute-0 nova_compute[349548]: 2025-12-05 02:17:51.292 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:17:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:17:52 compute-0 nova_compute[349548]: 2025-12-05 02:17:52.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:52 compute-0 podman[459508]: 2025-12-05 02:17:52.72546685 +0000 UTC m=+0.121065751 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:17:52 compute-0 podman[459507]: 2025-12-05 02:17:52.744620058 +0000 UTC m=+0.147332389 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec  5 02:17:52 compute-0 podman[459515]: 2025-12-05 02:17:52.766498023 +0000 UTC m=+0.137844683 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, version=9.6, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, 
maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  5 02:17:52 compute-0 podman[459509]: 2025-12-05 02:17:52.802390691 +0000 UTC m=+0.188757343 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:17:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:53 compute-0 nova_compute[349548]: 2025-12-05 02:17:53.310 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.215 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:17:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:17:56.216 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:17:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:17:57 compute-0 nova_compute[349548]: 2025-12-05 02:17:57.448 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:17:58 compute-0 nova_compute[349548]: 2025-12-05 02:17:58.313 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:17:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s wr, 1 op/s
Dec  5 02:17:59 compute-0 podman[158197]: time="2025-12-05T02:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:17:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8663 "" "Go-http-client/1.1"
Dec  5 02:18:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: ERROR   02:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:18:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:18:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:02 compute-0 nova_compute[349548]: 2025-12-05 02:18:02.451 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:03 compute-0 nova_compute[349548]: 2025-12-05 02:18:03.317 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:18:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:06 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:18:06 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:06 compute-0 podman[459705]: 2025-12-05 02:18:06.127532915 +0000 UTC m=+0.145823557 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:18:06 compute-0 podman[459704]: 2025-12-05 02:18:06.127965877 +0000 UTC m=+0.157478094 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:18:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:07 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7966fe72-9659-42a0-9d24-adeee677e91c does not exist
Dec  5 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 74ef1ba2-9207-4cfe-8566-e095ae41bdcc does not exist
Dec  5 02:18:07 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 43d59e8b-a924-4f69-95f2-4a8b6b678bf7 does not exist
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:18:07 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:18:07 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:18:07 compute-0 nova_compute[349548]: 2025-12-05 02:18:07.453 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:08 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:18:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:08 compute-0 nova_compute[349548]: 2025-12-05 02:18:08.319 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.450693943 +0000 UTC m=+0.078479975 container create e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.419690932 +0000 UTC m=+0.047476944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:08 compute-0 systemd[1]: Started libpod-conmon-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope.
Dec  5 02:18:08 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.595240253 +0000 UTC m=+0.223026305 container init e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.61258179 +0000 UTC m=+0.240367792 container start e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.618249859 +0000 UTC m=+0.246035891 container attach e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:18:08 compute-0 romantic_hoover[460033]: 167 167
Dec  5 02:18:08 compute-0 systemd[1]: libpod-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope: Deactivated successfully.
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.629376691 +0000 UTC m=+0.257162723 container died e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 02:18:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-35df3b719f303882b5de77a9d1fea60fdc19a4c6592c68b51f504ee4050c7d27-merged.mount: Deactivated successfully.
Dec  5 02:18:08 compute-0 podman[460016]: 2025-12-05 02:18:08.707435544 +0000 UTC m=+0.335221536 container remove e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hoover, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 02:18:08 compute-0 systemd[1]: libpod-conmon-e7293b92c0fc078c2619dfdc5c97cb79933e2544a39a8f0b25df5ed179f52db6.scope: Deactivated successfully.
Dec  5 02:18:08 compute-0 podman[460056]: 2025-12-05 02:18:08.970025549 +0000 UTC m=+0.072531228 container create 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:08.949114182 +0000 UTC m=+0.051619891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:09 compute-0 systemd[1]: Started libpod-conmon-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope.
Dec  5 02:18:09 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.163372969 +0000 UTC m=+0.265878738 container init 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.181725805 +0000 UTC m=+0.284231514 container start 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:18:09 compute-0 podman[460056]: 2025-12-05 02:18:09.188506405 +0000 UTC m=+0.291012114 container attach 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Dec  5 02:18:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:10 compute-0 great_grothendieck[460073]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:18:10 compute-0 great_grothendieck[460073]: --> relative data size: 1.0
Dec  5 02:18:10 compute-0 great_grothendieck[460073]: --> All data devices are unavailable
Dec  5 02:18:10 compute-0 systemd[1]: libpod-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Deactivated successfully.
Dec  5 02:18:10 compute-0 podman[460056]: 2025-12-05 02:18:10.577822704 +0000 UTC m=+1.680328443 container died 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:18:10 compute-0 systemd[1]: libpod-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Consumed 1.315s CPU time.
Dec  5 02:18:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-4853bc06573e69b4c7b99346c42517ef181b40b33eef1abb9e99fdbe0e051f5c-merged.mount: Deactivated successfully.
Dec  5 02:18:10 compute-0 podman[460056]: 2025-12-05 02:18:10.683667467 +0000 UTC m=+1.786173156 container remove 5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 02:18:10 compute-0 systemd[1]: libpod-conmon-5723a854545d34fff2a1a53b394f125d8ce8d9a548a095702695ed07f7598e8b.scope: Deactivated successfully.
Dec  5 02:18:11 compute-0 podman[460190]: 2025-12-05 02:18:11.291433167 +0000 UTC m=+0.121647208 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 02:18:11 compute-0 podman[460188]: 2025-12-05 02:18:11.298572667 +0000 UTC m=+0.131286648 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:18:11 compute-0 podman[460189]: 2025-12-05 02:18:11.298928017 +0000 UTC m=+0.120664160 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:18:11 compute-0 podman[460310]: 2025-12-05 02:18:11.902671724 +0000 UTC m=+0.113322674 container create 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 02:18:11 compute-0 podman[460310]: 2025-12-05 02:18:11.847577007 +0000 UTC m=+0.058228037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:11 compute-0 systemd[1]: Started libpod-conmon-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope.
Dec  5 02:18:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.067813932 +0000 UTC m=+0.278464902 container init 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.085566631 +0000 UTC m=+0.296217571 container start 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.09159159 +0000 UTC m=+0.302242530 container attach 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:18:12 compute-0 quizzical_babbage[460326]: 167 167
Dec  5 02:18:12 compute-0 systemd[1]: libpod-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope: Deactivated successfully.
Dec  5 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.101709034 +0000 UTC m=+0.312359984 container died 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:18:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-933ddd8ac423354d79f983853331efbe1bf6b94eabbf619cb1c124d85bc275cb-merged.mount: Deactivated successfully.
Dec  5 02:18:12 compute-0 podman[460310]: 2025-12-05 02:18:12.174961532 +0000 UTC m=+0.385612442 container remove 885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:18:12 compute-0 systemd[1]: libpod-conmon-885e23d0cc39f77eb66ca93f5a57932f4259bca363a6985b1e7ccd0a7fc06424.scope: Deactivated successfully.
Dec  5 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.409318774 +0000 UTC m=+0.072740364 container create 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:18:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:18:12 compute-0 nova_compute[349548]: 2025-12-05 02:18:12.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.378732145 +0000 UTC m=+0.042153715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:12 compute-0 systemd[1]: Started libpod-conmon-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope.
Dec  5 02:18:12 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.579756371 +0000 UTC m=+0.243177941 container init 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.607580212 +0000 UTC m=+0.271001792 container start 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 02:18:12 compute-0 podman[460348]: 2025-12-05 02:18:12.615382991 +0000 UTC m=+0.278804581 container attach 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 02:18:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:13 compute-0 nova_compute[349548]: 2025-12-05 02:18:13.324 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]: {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    "0": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "devices": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "/dev/loop3"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            ],
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_name": "ceph_lv0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_size": "21470642176",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "name": "ceph_lv0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "tags": {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_name": "ceph",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.crush_device_class": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.encrypted": "0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_id": "0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.vdo": "0"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            },
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "vg_name": "ceph_vg0"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        }
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    ],
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    "1": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "devices": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "/dev/loop4"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            ],
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_name": "ceph_lv1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_size": "21470642176",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "name": "ceph_lv1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "tags": {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_name": "ceph",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.crush_device_class": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.encrypted": "0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_id": "1",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.vdo": "0"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            },
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "vg_name": "ceph_vg1"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        }
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    ],
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    "2": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "devices": [
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "/dev/loop5"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            ],
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_name": "ceph_lv2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_size": "21470642176",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "name": "ceph_lv2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "tags": {
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.cluster_name": "ceph",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.crush_device_class": "",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.encrypted": "0",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osd_id": "2",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:                "ceph.vdo": "0"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            },
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "type": "block",
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:            "vg_name": "ceph_vg2"
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:        }
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]:    ]
Dec  5 02:18:13 compute-0 pedantic_stonebraker[460364]: }
Dec  5 02:18:13 compute-0 systemd[1]: libpod-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope: Deactivated successfully.
Dec  5 02:18:13 compute-0 podman[460373]: 2025-12-05 02:18:13.623263188 +0000 UTC m=+0.054867913 container died 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:18:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dde3470ec980a142432a3aee6b953e908d4520a47612f3e8cea57dbc588ff2a-merged.mount: Deactivated successfully.
Dec  5 02:18:13 compute-0 podman[460373]: 2025-12-05 02:18:13.727855385 +0000 UTC m=+0.159460070 container remove 247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:18:13 compute-0 systemd[1]: libpod-conmon-247b2cd8958bf7ae391fff0235fd169d8b5440910bd8e8f8b2f85740d201dec6.scope: Deactivated successfully.
Dec  5 02:18:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:14 compute-0 podman[460526]: 2025-12-05 02:18:14.859255201 +0000 UTC m=+0.097860429 container create f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:18:14 compute-0 podman[460526]: 2025-12-05 02:18:14.815527453 +0000 UTC m=+0.054132751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:14 compute-0 systemd[1]: Started libpod-conmon-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope.
Dec  5 02:18:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.000434487 +0000 UTC m=+0.239039765 container init f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.01123685 +0000 UTC m=+0.249842078 container start f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.016321163 +0000 UTC m=+0.254926471 container attach f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:18:15 compute-0 quirky_feynman[460542]: 167 167
Dec  5 02:18:15 compute-0 systemd[1]: libpod-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope: Deactivated successfully.
Dec  5 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.021812977 +0000 UTC m=+0.260418215 container died f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Dec  5 02:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-914ef960946ebc4ce862bfd48ece97a703e044cd6a709d9204e444fc70e04949-merged.mount: Deactivated successfully.
Dec  5 02:18:15 compute-0 podman[460526]: 2025-12-05 02:18:15.094433457 +0000 UTC m=+0.333038715 container remove f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:18:15 compute-0 systemd[1]: libpod-conmon-f2e11a6006fc7f2deae603a7c9c65499d99f8c87b90f23544984bb1cb699570f.scope: Deactivated successfully.
Dec  5 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.407826129 +0000 UTC m=+0.083054924 container create a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.379473772 +0000 UTC m=+0.054702557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:18:15 compute-0 systemd[1]: Started libpod-conmon-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope.
Dec  5 02:18:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.587809144 +0000 UTC m=+0.263037969 container init a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.602678201 +0000 UTC m=+0.277906966 container start a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:18:15 compute-0 podman[460564]: 2025-12-05 02:18:15.608427623 +0000 UTC m=+0.283656418 container attach a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:18:16
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', 'backups']
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:18:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]: {
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_id": 0,
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "type": "bluestore"
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    },
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_id": 1,
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "type": "bluestore"
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    },
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_id": 2,
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:        "type": "bluestore"
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]:    }
Dec  5 02:18:16 compute-0 flamboyant_khorana[460580]: }
Dec  5 02:18:16 compute-0 systemd[1]: libpod-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Deactivated successfully.
Dec  5 02:18:16 compute-0 systemd[1]: libpod-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Consumed 1.278s CPU time.
Dec  5 02:18:16 compute-0 podman[460564]: 2025-12-05 02:18:16.888342989 +0000 UTC m=+1.563571754 container died a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:18:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b8a579164044e79e1344e491651b48ce80631954d581bf940e4b4b30044c31e-merged.mount: Deactivated successfully.
Dec  5 02:18:17 compute-0 podman[460564]: 2025-12-05 02:18:17.203857711 +0000 UTC m=+1.879086506 container remove a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:18:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:18:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev f17468f8-2e18-492d-8c82-30d5bfe951d7 does not exist
Dec  5 02:18:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d7fb63d6-c47d-42ed-96f6-7ae9b389c644 does not exist
Dec  5 02:18:17 compute-0 systemd[1]: libpod-conmon-a382d5f2aa612fe33bac13e40b6849a91a630b7f64b133671dd4631dc4e86d25.scope: Deactivated successfully.
Dec  5 02:18:17 compute-0 nova_compute[349548]: 2025-12-05 02:18:17.460 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:17 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:18:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:18 compute-0 nova_compute[349548]: 2025-12-05 02:18:18.328 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:22 compute-0 nova_compute[349548]: 2025-12-05 02:18:22.462 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:23 compute-0 nova_compute[349548]: 2025-12-05 02:18:23.330 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:23 compute-0 podman[460676]: 2025-12-05 02:18:23.718621052 +0000 UTC m=+0.109090335 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:18:23 compute-0 podman[460675]: 2025-12-05 02:18:23.747798151 +0000 UTC m=+0.135479966 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:18:23 compute-0 podman[460678]: 2025-12-05 02:18:23.757397691 +0000 UTC m=+0.130603389 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter)
Dec  5 02:18:23 compute-0 podman[460677]: 2025-12-05 02:18:23.794204875 +0000 UTC m=+0.184090962 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:18:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:18:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:18:27 compute-0 nova_compute[349548]: 2025-12-05 02:18:27.466 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:28 compute-0 nova_compute[349548]: 2025-12-05 02:18:28.332 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:29 compute-0 podman[158197]: time="2025-12-05T02:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:18:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8657 "" "Go-http-client/1.1"
Dec  5 02:18:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: ERROR   02:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:18:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:18:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:32 compute-0 nova_compute[349548]: 2025-12-05 02:18:32.468 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:33 compute-0 nova_compute[349548]: 2025-12-05 02:18:33.335 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:36 compute-0 podman[460762]: 2025-12-05 02:18:36.714416327 +0000 UTC m=+0.114058144 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:18:36 compute-0 podman[460761]: 2025-12-05 02:18:36.732429583 +0000 UTC m=+0.136348360 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec  5 02:18:37 compute-0 nova_compute[349548]: 2025-12-05 02:18:37.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.325 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.326 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d01f3e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.337 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:18:38 compute-0 nova_compute[349548]: 2025-12-05 02:18:38.338 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.344 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:18:38.343853) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:18:38.347270) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.367 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.368 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.391 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:18:38.393339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.395 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:18:38.395163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.440 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.497 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.499 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.500 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:18:38.496999) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:18:38.499280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.501 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.503 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.504 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:18:38.501499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.505 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:18:38.503771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.506 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:18:38.505659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.531 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.559 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10383107676 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.560 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:18:38.559378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.562 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.563 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:18:38.561636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:18:38.563725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.568 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.573 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.573 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.574 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:18:38.574493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.576 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:18:38.576017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.578 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.579 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:18:38.578343) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:18:38.579745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.581 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:18:38.581362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.584 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:18:38.583096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:18:38.584570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.587 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 333710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:18:38.586194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:18:38.587529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.589 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 172180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:18:38.589157) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.592 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:18:38.590972) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:18:38.592716) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.594 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.595 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:18:38.596 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:18:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:41 compute-0 podman[460804]: 2025-12-05 02:18:41.70142099 +0000 UTC m=+0.101311007 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:18:41 compute-0 podman[460802]: 2025-12-05 02:18:41.713783387 +0000 UTC m=+0.123229262 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Dec  5 02:18:41 compute-0 podman[460803]: 2025-12-05 02:18:41.71850834 +0000 UTC m=+0.130505897 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.069 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:18:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.475 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.518 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.518 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.519 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:18:42 compute-0 nova_compute[349548]: 2025-12-05 02:18:42.520 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:18:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:43 compute-0 nova_compute[349548]: 2025-12-05 02:18:43.341 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:18:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:18:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:18:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1119111735' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.522 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.549 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.550 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:18:45 compute-0 nova_compute[349548]: 2025-12-05 02:18:45.551 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:46 compute-0 nova_compute[349548]: 2025-12-05 02:18:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:18:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:47 compute-0 nova_compute[349548]: 2025-12-05 02:18:47.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.338 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.339 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.367 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:18:48 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3230553076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.830 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.931 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.932 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.938 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:18:48 compute-0 nova_compute[349548]: 2025-12-05 02:18:48.939 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.461 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.545 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.546 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.546 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.547 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.561 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.585 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.586 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.599 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.618 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 02:18:49 compute-0 nova_compute[349548]: 2025-12-05 02:18:49.681 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:18:50 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:18:50 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375728888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.220 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.237 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.283 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.288 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:18:50 compute-0 nova_compute[349548]: 2025-12-05 02:18:50.289 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:18:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:52 compute-0 nova_compute[349548]: 2025-12-05 02:18:52.481 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.290 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:18:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:53 compute-0 nova_compute[349548]: 2025-12-05 02:18:53.371 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:54 compute-0 podman[460902]: 2025-12-05 02:18:54.70627764 +0000 UTC m=+0.105551765 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:18:54 compute-0 podman[460904]: 2025-12-05 02:18:54.712399912 +0000 UTC m=+0.095235826 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, version=9.6, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Dec  5 02:18:54 compute-0 podman[460901]: 2025-12-05 02:18:54.741651314 +0000 UTC m=+0.146258289 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:18:54 compute-0 podman[460903]: 2025-12-05 02:18:54.773781476 +0000 UTC m=+0.167563657 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.216 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.217 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:18:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:18:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:18:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:57 compute-0 nova_compute[349548]: 2025-12-05 02:18:57.484 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:18:58 compute-0 nova_compute[349548]: 2025-12-05 02:18:58.373 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:18:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:18:59 compute-0 podman[158197]: time="2025-12-05T02:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:18:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8644 "" "Go-http-client/1.1"
Dec  5 02:19:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: ERROR   02:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:19:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:19:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:02 compute-0 nova_compute[349548]: 2025-12-05 02:19:02.487 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:03 compute-0 nova_compute[349548]: 2025-12-05 02:19:03.376 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:07 compute-0 nova_compute[349548]: 2025-12-05 02:19:07.491 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:07 compute-0 podman[460982]: 2025-12-05 02:19:07.746312088 +0000 UTC m=+0.138430309 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:19:07 compute-0 podman[460981]: 2025-12-05 02:19:07.766810864 +0000 UTC m=+0.168259727 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 02:19:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:08 compute-0 nova_compute[349548]: 2025-12-05 02:19:08.380 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:12 compute-0 nova_compute[349548]: 2025-12-05 02:19:12.493 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:12 compute-0 podman[461020]: 2025-12-05 02:19:12.72547232 +0000 UTC m=+0.118914550 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 02:19:12 compute-0 podman[461019]: 2025-12-05 02:19:12.72973682 +0000 UTC m=+0.130246339 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:19:12 compute-0 podman[461021]: 2025-12-05 02:19:12.765038032 +0000 UTC m=+0.147671729 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:19:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:13 compute-0 nova_compute[349548]: 2025-12-05 02:19:13.384 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:19:16
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta']
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:19:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:17 compute-0 nova_compute[349548]: 2025-12-05 02:19:17.497 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:18 compute-0 nova_compute[349548]: 2025-12-05 02:19:18.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 024cbb5f-47fa-4486-8e2c-c319f394afb2 does not exist
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8387e1ff-adde-4910-a410-080e0539f38f does not exist
Dec  5 02:19:18 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev eaf16309-2a74-4d45-b72b-3aefc30ded0e does not exist
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:19:18 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.24174487 +0000 UTC m=+0.122353987 container create 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.198849495 +0000 UTC m=+0.079458692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:20 compute-0 systemd[1]: Started libpod-conmon-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope.
Dec  5 02:19:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.410543721 +0000 UTC m=+0.291152898 container init 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.423620838 +0000 UTC m=+0.304229965 container start 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:19:20 compute-0 epic_borg[461354]: 167 167
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.436113239 +0000 UTC m=+0.316722376 container attach 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:19:20 compute-0 systemd[1]: libpod-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope: Deactivated successfully.
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.438610489 +0000 UTC m=+0.319219636 container died 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:19:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1159187bb82294864e58c3b15b9bdd96e55ce2e5c842fa7bf7a027dacdd9cce9-merged.mount: Deactivated successfully.
Dec  5 02:19:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:20 compute-0 podman[461341]: 2025-12-05 02:19:20.509009736 +0000 UTC m=+0.389618853 container remove 596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:19:20 compute-0 systemd[1]: libpod-conmon-596c2e745da48c329b2a0418abc4fe122ac3d2a166502816007d7fb90844d184.scope: Deactivated successfully.
Dec  5 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.78655268 +0000 UTC m=+0.096099169 container create 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.748170483 +0000 UTC m=+0.057717022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:20 compute-0 systemd[1]: Started libpod-conmon-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope.
Dec  5 02:19:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.961415522 +0000 UTC m=+0.270962051 container init 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.984468379 +0000 UTC m=+0.294014838 container start 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:19:20 compute-0 podman[461381]: 2025-12-05 02:19:20.989421648 +0000 UTC m=+0.298968177 container attach 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:19:22 compute-0 intelligent_easley[461397]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:19:22 compute-0 intelligent_easley[461397]: --> relative data size: 1.0
Dec  5 02:19:22 compute-0 intelligent_easley[461397]: --> All data devices are unavailable
Dec  5 02:19:22 compute-0 systemd[1]: libpod-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Deactivated successfully.
Dec  5 02:19:22 compute-0 systemd[1]: libpod-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Consumed 1.239s CPU time.
Dec  5 02:19:22 compute-0 podman[461381]: 2025-12-05 02:19:22.291363684 +0000 UTC m=+1.600910133 container died 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:19:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e46f887179f8ab96706e950422306fb16a46f82d3862d17978238aa4cea4493e-merged.mount: Deactivated successfully.
Dec  5 02:19:22 compute-0 podman[461381]: 2025-12-05 02:19:22.362715918 +0000 UTC m=+1.672262397 container remove 5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 02:19:22 compute-0 systemd[1]: libpod-conmon-5881e54b91e95abddf2aa0edeadc078d344edbb660d4772b569129f1d98a7760.scope: Deactivated successfully.
Dec  5 02:19:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:22 compute-0 nova_compute[349548]: 2025-12-05 02:19:22.498 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:23 compute-0 nova_compute[349548]: 2025-12-05 02:19:23.391 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.531091923 +0000 UTC m=+0.081664524 container create d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.509353683 +0000 UTC m=+0.059926314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:23 compute-0 systemd[1]: Started libpod-conmon-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope.
Dec  5 02:19:23 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.688471993 +0000 UTC m=+0.239044605 container init d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.704668178 +0000 UTC m=+0.255240779 container start d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.709822933 +0000 UTC m=+0.260395584 container attach d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:19:23 compute-0 clever_hamilton[461590]: 167 167
Dec  5 02:19:23 compute-0 systemd[1]: libpod-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope: Deactivated successfully.
Dec  5 02:19:23 compute-0 conmon[461590]: conmon d042dd5fb061a7621b74 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope/container/memory.events
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.718472476 +0000 UTC m=+0.269045077 container died d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:19:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7740b3bd1f2708578514e3c9e0ebb068b624f9095e379a33db8697c1138ef150-merged.mount: Deactivated successfully.
Dec  5 02:19:23 compute-0 podman[461574]: 2025-12-05 02:19:23.78198702 +0000 UTC m=+0.332559621 container remove d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_hamilton, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 02:19:23 compute-0 systemd[1]: libpod-conmon-d042dd5fb061a7621b743fe5e084b2cdb61b2ff906dea1e3751d3b40024e3390.scope: Deactivated successfully.
Dec  5 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.052487647 +0000 UTC m=+0.079342829 container create 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.015700124 +0000 UTC m=+0.042555386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:24 compute-0 systemd[1]: Started libpod-conmon-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope.
Dec  5 02:19:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.247074172 +0000 UTC m=+0.273929384 container init 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  5 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.263217096 +0000 UTC m=+0.290072308 container start 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:19:24 compute-0 podman[461612]: 2025-12-05 02:19:24.287275441 +0000 UTC m=+0.314130653 container attach 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:19:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]: {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    "0": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "devices": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "/dev/loop3"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            ],
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_name": "ceph_lv0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_size": "21470642176",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "name": "ceph_lv0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "tags": {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_name": "ceph",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.crush_device_class": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.encrypted": "0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_id": "0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.vdo": "0"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            },
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "vg_name": "ceph_vg0"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        }
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    ],
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    "1": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "devices": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "/dev/loop4"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            ],
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_name": "ceph_lv1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_size": "21470642176",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "name": "ceph_lv1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "tags": {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_name": "ceph",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.crush_device_class": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.encrypted": "0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_id": "1",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.vdo": "0"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            },
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "vg_name": "ceph_vg1"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        }
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    ],
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    "2": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "devices": [
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "/dev/loop5"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            ],
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_name": "ceph_lv2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_size": "21470642176",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "name": "ceph_lv2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "tags": {
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.cluster_name": "ceph",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.crush_device_class": "",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.encrypted": "0",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osd_id": "2",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:                "ceph.vdo": "0"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            },
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "type": "block",
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:            "vg_name": "ceph_vg2"
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:        }
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]:    ]
Dec  5 02:19:25 compute-0 flamboyant_moore[461628]: }
Dec  5 02:19:25 compute-0 systemd[1]: libpod-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope: Deactivated successfully.
Dec  5 02:19:25 compute-0 podman[461612]: 2025-12-05 02:19:25.185711924 +0000 UTC m=+1.212567116 container died 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:19:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6e2b61fff105ab6905be48daba95fea5987be6bfa5ce64100db01b177a42d18-merged.mount: Deactivated successfully.
Dec  5 02:19:25 compute-0 podman[461612]: 2025-12-05 02:19:25.284510809 +0000 UTC m=+1.311365991 container remove 606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:19:25 compute-0 systemd[1]: libpod-conmon-606289e9eaee9ba9664cd9a09823d60c8dd7cda24f86b45a93e887cbb44904b1.scope: Deactivated successfully.
Dec  5 02:19:25 compute-0 podman[461648]: 2025-12-05 02:19:25.354814903 +0000 UTC m=+0.110434142 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, name=ubi9-minimal, distribution-scope=public, config_id=edpm)
Dec  5 02:19:25 compute-0 podman[461646]: 2025-12-05 02:19:25.359973418 +0000 UTC m=+0.130357222 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:19:25 compute-0 podman[461639]: 2025-12-05 02:19:25.376356148 +0000 UTC m=+0.130075754 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:19:25 compute-0 podman[461647]: 2025-12-05 02:19:25.383801697 +0000 UTC m=+0.145946200 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller)
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.304478345 +0000 UTC m=+0.086707526 container create 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.273842785 +0000 UTC m=+0.056071976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:26 compute-0 systemd[1]: Started libpod-conmon-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope.
Dec  5 02:19:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.456994279 +0000 UTC m=+0.239223530 container init 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.476274571 +0000 UTC m=+0.258503722 container start 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.481442876 +0000 UTC m=+0.263672067 container attach 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Dec  5 02:19:26 compute-0 festive_banach[461885]: 167 167
Dec  5 02:19:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:26 compute-0 systemd[1]: libpod-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope: Deactivated successfully.
Dec  5 02:19:26 compute-0 conmon[461885]: conmon 7c38b0fe964def9e3c96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope/container/memory.events
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.493160425 +0000 UTC m=+0.275389656 container died 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:19:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-1206816c622bf730949c2ef77af6ac171ffdbac9722e34b78b1a5ecedd73f227-merged.mount: Deactivated successfully.
Dec  5 02:19:26 compute-0 podman[461870]: 2025-12-05 02:19:26.57667875 +0000 UTC m=+0.358907921 container remove 7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:19:26 compute-0 systemd[1]: libpod-conmon-7c38b0fe964def9e3c96c23a2a7de63abc38e2e50cde9b85e84a96fbe7b39161.scope: Deactivated successfully.
Dec  5 02:19:26 compute-0 podman[461907]: 2025-12-05 02:19:26.815019434 +0000 UTC m=+0.055218131 container create fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:19:26 compute-0 systemd[1]: Started libpod-conmon-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope.
Dec  5 02:19:26 compute-0 podman[461907]: 2025-12-05 02:19:26.79171477 +0000 UTC m=+0.031913497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:19:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.030376453 +0000 UTC m=+0.270575170 container init fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.048246665 +0000 UTC m=+0.288445352 container start fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:19:27 compute-0 podman[461907]: 2025-12-05 02:19:27.053437511 +0000 UTC m=+0.293636208 container attach fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:19:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:19:27 compute-0 nova_compute[349548]: 2025-12-05 02:19:27.501 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:28 compute-0 bold_lalande[461923]: {
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_id": 0,
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "type": "bluestore"
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    },
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_id": 1,
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "type": "bluestore"
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    },
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_id": 2,
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:19:28 compute-0 bold_lalande[461923]:        "type": "bluestore"
Dec  5 02:19:28 compute-0 bold_lalande[461923]:    }
Dec  5 02:19:28 compute-0 bold_lalande[461923]: }
Dec  5 02:19:28 compute-0 systemd[1]: libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Deactivated successfully.
Dec  5 02:19:28 compute-0 systemd[1]: libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Consumed 1.125s CPU time.
Dec  5 02:19:28 compute-0 conmon[461923]: conmon fc52739f843ab7650caa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope/container/memory.events
Dec  5 02:19:28 compute-0 podman[461907]: 2025-12-05 02:19:28.175482793 +0000 UTC m=+1.415681480 container died fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bced3ab3699c217c4d16a2582797ad466ae2a995cfc85ab2e10ae30d025e476-merged.mount: Deactivated successfully.
Dec  5 02:19:28 compute-0 podman[461907]: 2025-12-05 02:19:28.24799299 +0000 UTC m=+1.488191697 container remove fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:19:28 compute-0 systemd[1]: libpod-conmon-fc52739f843ab7650caa025c6f55b98ec4f08e0deb7062038aef24a4e9c767ee.scope: Deactivated successfully.
Dec  5 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:19:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:19:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d383363a-c9e9-4365-9f2a-cce2440de5b1 does not exist
Dec  5 02:19:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev dc2edbbd-150d-4fb1-946b-d10bba2b1ced does not exist
Dec  5 02:19:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:28 compute-0 nova_compute[349548]: 2025-12-05 02:19:28.393 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:19:29 compute-0 podman[158197]: time="2025-12-05T02:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:19:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8658 "" "Go-http-client/1.1"
Dec  5 02:19:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: ERROR   02:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:19:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:19:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:32 compute-0 nova_compute[349548]: 2025-12-05 02:19:32.506 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:33 compute-0 nova_compute[349548]: 2025-12-05 02:19:33.399 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:37 compute-0 nova_compute[349548]: 2025-12-05 02:19:37.511 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:38 compute-0 nova_compute[349548]: 2025-12-05 02:19:38.402 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:38 compute-0 podman[462021]: 2025-12-05 02:19:38.688405256 +0000 UTC m=+0.098153817 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:19:38 compute-0 podman[462020]: 2025-12-05 02:19:38.7166804 +0000 UTC m=+0.126814032 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  5 02:19:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.314 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.315 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.316 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:19:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:42 compute-0 nova_compute[349548]: 2025-12-05 02:19:42.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:43 compute-0 nova_compute[349548]: 2025-12-05 02:19:43.406 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:43 compute-0 podman[462060]: 2025-12-05 02:19:43.689066133 +0000 UTC m=+0.098902569 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 02:19:43 compute-0 podman[462062]: 2025-12-05 02:19:43.710602117 +0000 UTC m=+0.097599232 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 02:19:43 compute-0 podman[462061]: 2025-12-05 02:19:43.714390164 +0000 UTC m=+0.118119609 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, release=1214.1726694543, version=9.4, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.369 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.393 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:19:44 compute-0 nova_compute[349548]: 2025-12-05 02:19:44.394 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:19:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:19:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:19:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:19:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/296804695' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:19:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:19:47 compute-0 nova_compute[349548]: 2025-12-05 02:19:47.517 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.684218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187684288, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 251, "total_data_size": 2126585, "memory_usage": 2161584, "flush_reason": "Manual Compaction"}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187701688, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2084484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42637, "largest_seqno": 43983, "table_properties": {"data_size": 2078037, "index_size": 3650, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13268, "raw_average_key_size": 19, "raw_value_size": 2065254, "raw_average_value_size": 3091, "num_data_blocks": 164, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901049, "oldest_key_time": 1764901049, "file_creation_time": 1764901187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 17573 microseconds, and 9519 cpu microseconds.
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.701797) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2084484 bytes OK
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.701820) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705156) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705179) EVENT_LOG_v1 {"time_micros": 1764901187705172, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.705200) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2120577, prev total WAL file size 2120577, number of live WAL files 2.
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.706735) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2035KB)], [101(8827KB)]
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187706828, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11124132, "oldest_snapshot_seqno": -1}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 5921 keys, 9408859 bytes, temperature: kUnknown
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187777170, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9408859, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9368914, "index_size": 24027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14853, "raw_key_size": 154038, "raw_average_key_size": 26, "raw_value_size": 9261461, "raw_average_value_size": 1564, "num_data_blocks": 958, "num_entries": 5921, "num_filter_entries": 5921, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901187, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.777422) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9408859 bytes
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.780266) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.0 rd, 133.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.6 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 6435, records dropped: 514 output_compression: NoCompression
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.780295) EVENT_LOG_v1 {"time_micros": 1764901187780282, "job": 60, "event": "compaction_finished", "compaction_time_micros": 70413, "compaction_time_cpu_micros": 42020, "output_level": 6, "num_output_files": 1, "total_output_size": 9408859, "num_input_records": 6435, "num_output_records": 5921, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187781104, "job": 60, "event": "table_file_deletion", "file_number": 103}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901187784003, "job": 60, "event": "table_file_deletion", "file_number": 101}
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.706451) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:47 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:19:47.784221) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:48 compute-0 nova_compute[349548]: 2025-12-05 02:19:48.410 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.107 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.110 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.111 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:19:49 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:19:49 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1496427799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.596 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.700 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.700 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.715 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:19:49 compute-0 nova_compute[349548]: 2025-12-05 02:19:49.716 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.374 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.376 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.376 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.377 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.465 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.466 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.466 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.467 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:19:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:50 compute-0 nova_compute[349548]: 2025-12-05 02:19:50.535 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:19:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:19:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049850232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.087 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.099 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.123 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.127 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:19:51 compute-0 nova_compute[349548]: 2025-12-05 02:19:51.127 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:19:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:52 compute-0 nova_compute[349548]: 2025-12-05 02:19:52.521 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.125 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.149 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.150 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:19:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:53 compute-0 nova_compute[349548]: 2025-12-05 02:19:53.413 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:55 compute-0 podman[462157]: 2025-12-05 02:19:55.708745863 +0000 UTC m=+0.109180077 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  5 02:19:55 compute-0 podman[462158]: 2025-12-05 02:19:55.747506692 +0000 UTC m=+0.135325522 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:19:55 compute-0 podman[462160]: 2025-12-05 02:19:55.755804865 +0000 UTC m=+0.133689226 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  5 02:19:55 compute-0 podman[462159]: 2025-12-05 02:19:55.781000623 +0000 UTC m=+0.165516410 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.218 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:19:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:19:56.219 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:19:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:57 compute-0 nova_compute[349548]: 2025-12-05 02:19:57.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:19:58 compute-0 nova_compute[349548]: 2025-12-05 02:19:58.416 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:19:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:19:59 compute-0 podman[158197]: time="2025-12-05T02:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:19:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8638 "" "Go-http-client/1.1"
Dec  5 02:20:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: ERROR   02:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:20:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:20:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:02 compute-0 nova_compute[349548]: 2025-12-05 02:20:02.528 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:03 compute-0 nova_compute[349548]: 2025-12-05 02:20:03.419 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:07 compute-0 nova_compute[349548]: 2025-12-05 02:20:07.531 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:08 compute-0 nova_compute[349548]: 2025-12-05 02:20:08.423 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:09 compute-0 podman[462240]: 2025-12-05 02:20:09.692212168 +0000 UTC m=+0.093428385 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:20:09 compute-0 podman[462241]: 2025-12-05 02:20:09.733785156 +0000 UTC m=+0.129475198 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:20:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:12 compute-0 nova_compute[349548]: 2025-12-05 02:20:12.535 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:13 compute-0 nova_compute[349548]: 2025-12-05 02:20:13.425 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:14 compute-0 podman[462280]: 2025-12-05 02:20:14.736824139 +0000 UTC m=+0.137693949 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 02:20:14 compute-0 podman[462282]: 2025-12-05 02:20:14.74009124 +0000 UTC m=+0.134864729 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:20:14 compute-0 podman[462281]: 2025-12-05 02:20:14.754925837 +0000 UTC m=+0.152355730 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container)
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:20:16
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups']
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:20:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:17 compute-0 nova_compute[349548]: 2025-12-05 02:20:17.540 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:20:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:18 compute-0 nova_compute[349548]: 2025-12-05 02:20:18.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:22 compute-0 nova_compute[349548]: 2025-12-05 02:20:22.543 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:23 compute-0 nova_compute[349548]: 2025-12-05 02:20:23.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:26 compute-0 podman[462337]: 2025-12-05 02:20:26.716167026 +0000 UTC m=+0.117533112 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:20:26 compute-0 podman[462339]: 2025-12-05 02:20:26.721601628 +0000 UTC m=+0.102596112 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, version=9.6, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41)
Dec  5 02:20:26 compute-0 podman[462336]: 2025-12-05 02:20:26.742051853 +0000 UTC m=+0.146866746 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  5 02:20:26 compute-0 podman[462338]: 2025-12-05 02:20:26.781383567 +0000 UTC m=+0.172072134 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:20:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:20:27 compute-0 nova_compute[349548]: 2025-12-05 02:20:27.546 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:28 compute-0 nova_compute[349548]: 2025-12-05 02:20:28.434 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:29 compute-0 podman[158197]: time="2025-12-05T02:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:20:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  5 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:20:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:20:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d78192db-4b9a-4b0d-b611-8379a2a3ec9a does not exist
Dec  5 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9162fe8d-8ae2-49ea-8334-c9b251a7a41a does not exist
Dec  5 02:20:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ef73b983-9d66-4211-8018-5d968d1f552d does not exist
Dec  5 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:20:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:20:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:20:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.085823391 +0000 UTC m=+0.076265513 container create e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.052588517 +0000 UTC m=+0.043030689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:31 compute-0 systemd[1]: Started libpod-conmon-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope.
Dec  5 02:20:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.294032258 +0000 UTC m=+0.284474410 container init e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.315747038 +0000 UTC m=+0.306189140 container start e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.32079473 +0000 UTC m=+0.311236832 container attach e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:20:31 compute-0 quirky_lehmann[462701]: 167 167
Dec  5 02:20:31 compute-0 systemd[1]: libpod-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope: Deactivated successfully.
Dec  5 02:20:31 compute-0 conmon[462701]: conmon e6e0646d7b34835984c5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope/container/memory.events
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.332966692 +0000 UTC m=+0.323408784 container died e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:20:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5a0b7623a66856cf8003bb224e77346c8a1e3e12e2032103147363361245318-merged.mount: Deactivated successfully.
Dec  5 02:20:31 compute-0 podman[462685]: 2025-12-05 02:20:31.395700224 +0000 UTC m=+0.386142316 container remove e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lehmann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:20:31 compute-0 systemd[1]: libpod-conmon-e6e0646d7b34835984c52b1513a9e129a05aa7fa24d2b677a1b105ac0b5b5e2a.scope: Deactivated successfully.
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: ERROR   02:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:20:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.69822007 +0000 UTC m=+0.087341704 container create c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.662324132 +0000 UTC m=+0.051445816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:31 compute-0 systemd[1]: Started libpod-conmon-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope.
Dec  5 02:20:31 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.886227661 +0000 UTC m=+0.275349285 container init c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.914610628 +0000 UTC m=+0.303732232 container start c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:20:31 compute-0 podman[462723]: 2025-12-05 02:20:31.919189286 +0000 UTC m=+0.308310890 container attach c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:20:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:32 compute-0 nova_compute[349548]: 2025-12-05 02:20:32.549 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:33 compute-0 dreamy_hertz[462739]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:20:33 compute-0 dreamy_hertz[462739]: --> relative data size: 1.0
Dec  5 02:20:33 compute-0 dreamy_hertz[462739]: --> All data devices are unavailable
Dec  5 02:20:33 compute-0 systemd[1]: libpod-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Deactivated successfully.
Dec  5 02:20:33 compute-0 systemd[1]: libpod-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Consumed 1.146s CPU time.
Dec  5 02:20:33 compute-0 podman[462723]: 2025-12-05 02:20:33.116006669 +0000 UTC m=+1.505128323 container died c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:20:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-015ea9561c7b9afc648a84141aeb5d0539b625d175f77d4beb05640d4b3921c0-merged.mount: Deactivated successfully.
Dec  5 02:20:33 compute-0 podman[462723]: 2025-12-05 02:20:33.207447807 +0000 UTC m=+1.596569421 container remove c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Dec  5 02:20:33 compute-0 systemd[1]: libpod-conmon-c893a13fb1a1b06cc000de47b93fc2ff36b0f3a2360fad0bf8f04f373d6a6140.scope: Deactivated successfully.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.354748) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233354794, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 593, "num_deletes": 250, "total_data_size": 663003, "memory_usage": 674784, "flush_reason": "Manual Compaction"}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233361396, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 433384, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43984, "largest_seqno": 44576, "table_properties": {"data_size": 430566, "index_size": 790, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7550, "raw_average_key_size": 20, "raw_value_size": 424717, "raw_average_value_size": 1150, "num_data_blocks": 36, "num_entries": 369, "num_filter_entries": 369, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901188, "oldest_key_time": 1764901188, "file_creation_time": 1764901233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 7030 microseconds, and 2443 cpu microseconds.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.361773) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 433384 bytes OK
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.361795) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365126) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365152) EVENT_LOG_v1 {"time_micros": 1764901233365144, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.365171) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 659754, prev total WAL file size 659754, number of live WAL files 2.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.366380) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(423KB)], [104(9188KB)]
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233366436, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 9842243, "oldest_snapshot_seqno": -1}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 5797 keys, 6776292 bytes, temperature: kUnknown
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233422937, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 6776292, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6741445, "index_size": 19249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 151643, "raw_average_key_size": 26, "raw_value_size": 6640378, "raw_average_value_size": 1145, "num_data_blocks": 760, "num_entries": 5797, "num_filter_entries": 5797, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.423617) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 6776292 bytes
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.426105) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 174.0 rd, 119.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.0 +0.0 blob) out(6.5 +0.0 blob), read-write-amplify(38.3) write-amplify(15.6) OK, records in: 6290, records dropped: 493 output_compression: NoCompression
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.426128) EVENT_LOG_v1 {"time_micros": 1764901233426118, "job": 62, "event": "compaction_finished", "compaction_time_micros": 56571, "compaction_time_cpu_micros": 31189, "output_level": 6, "num_output_files": 1, "total_output_size": 6776292, "num_input_records": 6290, "num_output_records": 5797, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233426690, "job": 62, "event": "table_file_deletion", "file_number": 106}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901233428737, "job": 62, "event": "table_file_deletion", "file_number": 104}
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.366106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:20:33.429051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:20:33 compute-0 nova_compute[349548]: 2025-12-05 02:20:33.436 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.425384354 +0000 UTC m=+0.096297265 container create 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.391599405 +0000 UTC m=+0.062512366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:34 compute-0 systemd[1]: Started libpod-conmon-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope.
Dec  5 02:20:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.558523924 +0000 UTC m=+0.229436825 container init 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.568812663 +0000 UTC m=+0.239725544 container start 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:20:34 compute-0 condescending_dhawan[462931]: 167 167
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.573431322 +0000 UTC m=+0.244344203 container attach 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:20:34 compute-0 systemd[1]: libpod-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope: Deactivated successfully.
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.576122928 +0000 UTC m=+0.247035809 container died 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:20:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb8c01d3b3a6f0cfd6fb4bf8d21981b15a4ffa90cb66f2855d791992cab23ea5-merged.mount: Deactivated successfully.
Dec  5 02:20:34 compute-0 podman[462916]: 2025-12-05 02:20:34.628678784 +0000 UTC m=+0.299591665 container remove 8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  5 02:20:34 compute-0 systemd[1]: libpod-conmon-8066b69f096faf35806cc122e7fb61799008f922de339694b3625b703acc6c60.scope: Deactivated successfully.
Dec  5 02:20:34 compute-0 podman[462954]: 2025-12-05 02:20:34.872169183 +0000 UTC m=+0.086644795 container create b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:20:34 compute-0 podman[462954]: 2025-12-05 02:20:34.834364501 +0000 UTC m=+0.048840163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:34 compute-0 systemd[1]: Started libpod-conmon-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope.
Dec  5 02:20:34 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.012412702 +0000 UTC m=+0.226888364 container init b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.036189509 +0000 UTC m=+0.250665121 container start b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Dec  5 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.042418154 +0000 UTC m=+0.256893826 container attach b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]: {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    "0": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "devices": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "/dev/loop3"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            ],
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_name": "ceph_lv0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_size": "21470642176",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "name": "ceph_lv0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "tags": {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_name": "ceph",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.crush_device_class": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.encrypted": "0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_id": "0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.vdo": "0"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            },
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "vg_name": "ceph_vg0"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        }
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    ],
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    "1": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "devices": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "/dev/loop4"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            ],
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_name": "ceph_lv1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_size": "21470642176",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "name": "ceph_lv1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "tags": {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_name": "ceph",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.crush_device_class": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.encrypted": "0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_id": "1",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.vdo": "0"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            },
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "vg_name": "ceph_vg1"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        }
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    ],
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    "2": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "devices": [
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "/dev/loop5"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            ],
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_name": "ceph_lv2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_size": "21470642176",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "name": "ceph_lv2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "tags": {
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.cluster_name": "ceph",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.crush_device_class": "",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.encrypted": "0",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osd_id": "2",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:                "ceph.vdo": "0"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            },
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "type": "block",
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:            "vg_name": "ceph_vg2"
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:        }
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]:    ]
Dec  5 02:20:35 compute-0 fervent_chandrasekhar[462970]: }
Dec  5 02:20:35 compute-0 systemd[1]: libpod-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope: Deactivated successfully.
Dec  5 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.797134981 +0000 UTC m=+1.011610563 container died b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:20:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ad38289dbc43e83f63dd5f0af46d66dd8fd06e17d0ac7b15516b8450f656e0d-merged.mount: Deactivated successfully.
Dec  5 02:20:35 compute-0 podman[462954]: 2025-12-05 02:20:35.885220875 +0000 UTC m=+1.099696447 container remove b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Dec  5 02:20:35 compute-0 systemd[1]: libpod-conmon-b26ca18470a3b5468d22033325a2389df3b2c7dbb86b60f92874d229ab711b11.scope: Deactivated successfully.
Dec  5 02:20:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.096386161 +0000 UTC m=+0.098544459 container create 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Dec  5 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.059033082 +0000 UTC m=+0.061191440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:37 compute-0 systemd[1]: Started libpod-conmon-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope.
Dec  5 02:20:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.233744869 +0000 UTC m=+0.235903167 container init 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.250255152 +0000 UTC m=+0.252413440 container start 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:20:37 compute-0 podman[463127]: 2025-12-05 02:20:37.257682651 +0000 UTC m=+0.259840999 container attach 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:20:37 compute-0 nervous_liskov[463143]: 167 167
Dec  5 02:20:37 compute-0 systemd[1]: libpod-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope: Deactivated successfully.
Dec  5 02:20:37 compute-0 conmon[463143]: conmon 2d895d60c514f0d5ea57 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope/container/memory.events
Dec  5 02:20:37 compute-0 podman[463148]: 2025-12-05 02:20:37.348597864 +0000 UTC m=+0.059432360 container died 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:20:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-1417fbb7d2e13be256f38812a6c06daf9ec7396f38f98d146e726615b3e01d29-merged.mount: Deactivated successfully.
Dec  5 02:20:37 compute-0 podman[463148]: 2025-12-05 02:20:37.42356944 +0000 UTC m=+0.134403866 container remove 2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 02:20:37 compute-0 systemd[1]: libpod-conmon-2d895d60c514f0d5ea572a5e32fb437152d3e3e5b376c53d4a1efba8689c9266.scope: Deactivated successfully.
Dec  5 02:20:37 compute-0 nova_compute[349548]: 2025-12-05 02:20:37.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.661188804 +0000 UTC m=+0.062544708 container create 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.641271124 +0000 UTC m=+0.042627048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:20:37 compute-0 systemd[1]: Started libpod-conmon-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope.
Dec  5 02:20:37 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.86151876 +0000 UTC m=+0.262874754 container init 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.878673192 +0000 UTC m=+0.280029096 container start 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:20:37 compute-0 podman[463170]: 2025-12-05 02:20:37.88321902 +0000 UTC m=+0.284575004 container attach 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.349 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.350 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.351 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.352 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.353 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.355 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.356 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.357 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.354 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.359 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:20:38.359251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:20:38.361849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.383 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.383 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:20:38.405589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:20:38.407532) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.439 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 nova_compute[349548]: 2025-12-05 02:20:38.439 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.479 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 30075904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.479 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.480 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.481 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:20:38.481858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.482 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.483 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.483 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2761905668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.484 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 175446078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.485 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:20:38.485852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.486 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.487 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1075 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.487 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.488 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:20:38.489373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.490 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.490 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.491 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.492 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.493 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.494 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.494 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.495 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.496 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.498 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:20:38.493763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:20:38.499464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.555 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.555 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10383107676 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.557 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 277 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.558 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.559 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:20:38.556398) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.560 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:20:38.557858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.561 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:20:38.559318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.573 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.576 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.578 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:20:38.577813) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.577 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.578 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.579 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.580 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.581 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.582 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.582 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.584 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.585 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.587 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.588 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 43.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.590 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.591 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.593 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 335710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 291630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.595 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.596 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.599 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.600 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.601 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:20:38.581198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:20:38.584167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:20:38.585585) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:20:38.586957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:20:38.588714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:20:38.590198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:20:38.591671) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:20:38.593315) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:20:38.594847) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:20:38.596575) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:20:38.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:20:38.598052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]: {
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_id": 0,
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "type": "bluestore"
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    },
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_id": 1,
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "type": "bluestore"
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    },
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_id": 2,
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:        "type": "bluestore"
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]:    }
Dec  5 02:20:38 compute-0 stupefied_hofstadter[463186]: }
Dec  5 02:20:39 compute-0 systemd[1]: libpod-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Deactivated successfully.
Dec  5 02:20:39 compute-0 podman[463170]: 2025-12-05 02:20:39.009958045 +0000 UTC m=+1.411313959 container died 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Dec  5 02:20:39 compute-0 systemd[1]: libpod-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Consumed 1.119s CPU time.
Dec  5 02:20:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-85494183898c1b5641806e6429cbc1b4e3252187a7673d3131ec9933adf2f2f9-merged.mount: Deactivated successfully.
Dec  5 02:20:39 compute-0 podman[463170]: 2025-12-05 02:20:39.10128851 +0000 UTC m=+1.502644464 container remove 3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_hofstadter, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:20:39 compute-0 systemd[1]: libpod-conmon-3dbbd3cd8d7f6eedaf5dd9656ebb96d3af606652a8eb60d40472952dcaa46164.scope: Deactivated successfully.
Dec  5 02:20:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:20:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:20:39 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev b60b39e4-1c15-4620-8d30-9fd9ba303b62 does not exist
Dec  5 02:20:39 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c5987b6-66a2-4217-b4fe-a3274340f6c2 does not exist
Dec  5 02:20:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:40 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:20:40 compute-0 nova_compute[349548]: 2025-12-05 02:20:40.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:20:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:40 compute-0 podman[463282]: 2025-12-05 02:20:40.721278548 +0000 UTC m=+0.125621189 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:20:40 compute-0 podman[463283]: 2025-12-05 02:20:40.727371959 +0000 UTC m=+0.137769230 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:20:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:42 compute-0 nova_compute[349548]: 2025-12-05 02:20:42.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.084 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:20:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:43 compute-0 nova_compute[349548]: 2025-12-05 02:20:43.442 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.368 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.369 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:20:44 compute-0 nova_compute[349548]: 2025-12-05 02:20:44.370 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:20:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:20:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:20:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:20:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3474213154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:20:45 compute-0 podman[463326]: 2025-12-05 02:20:45.729790206 +0000 UTC m=+0.123993983 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  5 02:20:45 compute-0 podman[463324]: 2025-12-05 02:20:45.730005182 +0000 UTC m=+0.131672689 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 02:20:45 compute-0 podman[463325]: 2025-12-05 02:20:45.730223598 +0000 UTC m=+0.130692271 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:20:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.586 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.609 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:20:46 compute-0 nova_compute[349548]: 2025-12-05 02:20:46.610 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:20:47 compute-0 nova_compute[349548]: 2025-12-05 02:20:47.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:47 compute-0 nova_compute[349548]: 2025-12-05 02:20:47.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:48 compute-0 nova_compute[349548]: 2025-12-05 02:20:48.445 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:49 compute-0 nova_compute[349548]: 2025-12-05 02:20:49.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:49 compute-0 nova_compute[349548]: 2025-12-05 02:20:49.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:50 compute-0 nova_compute[349548]: 2025-12-05 02:20:50.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.106 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.107 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.109 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.109 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:20:51 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:20:51 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805188336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.675 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.764 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.765 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.772 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:20:51 compute-0 nova_compute[349548]: 2025-12-05 02:20:51.773 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.279 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.280 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3523MB free_disk=59.897212982177734GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.281 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.281 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.443 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.443 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.444 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.444 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:20:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.562 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:52 compute-0 nova_compute[349548]: 2025-12-05 02:20:52.656 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:20:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:20:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724534420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.175 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.191 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.212 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.218 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:20:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:53 compute-0 nova_compute[349548]: 2025-12-05 02:20:53.449 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:56 compute-0 nova_compute[349548]: 2025-12-05 02:20:56.220 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.219 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.220 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:20:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:20:56.221 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:20:56 compute-0 nova_compute[349548]: 2025-12-05 02:20:56.221 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:20:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:57 compute-0 nova_compute[349548]: 2025-12-05 02:20:57.567 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:57 compute-0 podman[463424]: 2025-12-05 02:20:57.724024221 +0000 UTC m=+0.112444389 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:20:57 compute-0 podman[463431]: 2025-12-05 02:20:57.724239047 +0000 UTC m=+0.103081076 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.6, config_id=edpm, io.buildah.version=1.33.7, 
container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, vendor=Red Hat, Inc.)
Dec  5 02:20:57 compute-0 podman[463423]: 2025-12-05 02:20:57.739794764 +0000 UTC m=+0.145283271 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125)
Dec  5 02:20:57 compute-0 podman[463425]: 2025-12-05 02:20:57.775212659 +0000 UTC m=+0.161517548 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:20:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:20:58 compute-0 nova_compute[349548]: 2025-12-05 02:20:58.454 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:20:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:20:59 compute-0 podman[158197]: time="2025-12-05T02:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:20:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8675 "" "Go-http-client/1.1"
Dec  5 02:21:00 compute-0 nova_compute[349548]: 2025-12-05 02:21:00.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:00 compute-0 nova_compute[349548]: 2025-12-05 02:21:00.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:21:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:21:01 compute-0 openstack_network_exporter[366555]: ERROR   02:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:21:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:02 compute-0 nova_compute[349548]: 2025-12-05 02:21:02.568 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:03 compute-0 nova_compute[349548]: 2025-12-05 02:21:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:03 compute-0 nova_compute[349548]: 2025-12-05 02:21:03.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:07 compute-0 nova_compute[349548]: 2025-12-05 02:21:07.570 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:08 compute-0 nova_compute[349548]: 2025-12-05 02:21:08.460 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:11 compute-0 podman[463509]: 2025-12-05 02:21:11.719764571 +0000 UTC m=+0.119495388 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:21:11 compute-0 podman[463508]: 2025-12-05 02:21:11.736802919 +0000 UTC m=+0.139719824 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 02:21:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:12 compute-0 nova_compute[349548]: 2025-12-05 02:21:12.574 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:13 compute-0 nova_compute[349548]: 2025-12-05 02:21:13.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:21:16
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta']
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:21:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:16 compute-0 podman[463554]: 2025-12-05 02:21:16.733724341 +0000 UTC m=+0.119025694 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 02:21:16 compute-0 podman[463553]: 2025-12-05 02:21:16.748703862 +0000 UTC m=+0.146178817 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Dec  5 02:21:16 compute-0 podman[463552]: 2025-12-05 02:21:16.76322531 +0000 UTC m=+0.165103298 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:21:17 compute-0 nova_compute[349548]: 2025-12-05 02:21:17.579 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:21:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:18 compute-0 nova_compute[349548]: 2025-12-05 02:21:18.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s rd, 2 op/s
Dec  5 02:21:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec  5 02:21:22 compute-0 nova_compute[349548]: 2025-12-05 02:21:22.581 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:23 compute-0 nova_compute[349548]: 2025-12-05 02:21:23.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2172: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec  5 02:21:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 170 B/s wr, 4 op/s
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015181009677997005 of space, bias 1.0, pg target 0.45543029033991017 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:21:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:21:27 compute-0 nova_compute[349548]: 2025-12-05 02:21:27.585 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:28 compute-0 nova_compute[349548]: 2025-12-05 02:21:28.475 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  5 02:21:28 compute-0 podman[463611]: 2025-12-05 02:21:28.712875753 +0000 UTC m=+0.105388431 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec  5 02:21:28 compute-0 podman[463612]: 2025-12-05 02:21:28.747840885 +0000 UTC m=+0.134854909 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:21:28 compute-0 podman[463613]: 2025-12-05 02:21:28.776821269 +0000 UTC m=+0.160570031 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  5 02:21:28 compute-0 podman[463614]: 2025-12-05 02:21:28.778417443 +0000 UTC m=+0.138124370 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:21:29 compute-0 podman[158197]: time="2025-12-05T02:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:21:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8664 "" "Go-http-client/1.1"
Dec  5 02:21:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: ERROR   02:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:21:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:21:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 8.6 KiB/s wr, 2 op/s
Dec  5 02:21:32 compute-0 nova_compute[349548]: 2025-12-05 02:21:32.588 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:33 compute-0 nova_compute[349548]: 2025-12-05 02:21:33.479 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2177: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  5 02:21:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 0 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  5 02:21:37 compute-0 nova_compute[349548]: 2025-12-05 02:21:37.592 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:38 compute-0 nova_compute[349548]: 2025-12-05 02:21:38.482 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 8.4 KiB/s wr, 0 op/s
Dec  5 02:21:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Dec  5 02:21:40 compute-0 podman[463867]: 2025-12-05 02:21:40.876866063 +0000 UTC m=+0.133394708 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:21:41 compute-0 podman[463867]: 2025-12-05 02:21:41.024150909 +0000 UTC m=+0.280679504 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:21:41 compute-0 podman[463966]: 2025-12-05 02:21:41.932075639 +0000 UTC m=+0.121015779 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Dec  5 02:21:41 compute-0 podman[463967]: 2025-12-05 02:21:41.95953289 +0000 UTC m=+0.144995383 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:21:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:42 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:21:42 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:42 compute-0 nova_compute[349548]: 2025-12-05 02:21:42.596 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:43 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 05ca3532-828f-43a6-a572-23b8e9172fd5 does not exist
Dec  5 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 285a2eae-9077-4e33-92ab-1902929aafd3 does not exist
Dec  5 02:21:43 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1119e751-fca1-4640-98dd-9d1938c924f9 does not exist
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:21:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:21:43 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:21:43 compute-0 nova_compute[349548]: 2025-12-05 02:21:43.485 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:44 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.544761088 +0000 UTC m=+0.074224036 container create 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:21:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:44 compute-0 systemd[1]: Started libpod-conmon-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope.
Dec  5 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.524591722 +0000 UTC m=+0.054054670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.676547379 +0000 UTC m=+0.206010377 container init 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.69080647 +0000 UTC m=+0.220269448 container start 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Dec  5 02:21:44 compute-0 suspicious_wright[464336]: 167 167
Dec  5 02:21:44 compute-0 systemd[1]: libpod-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope: Deactivated successfully.
Dec  5 02:21:44 compute-0 conmon[464336]: conmon 39adaf01a479cdbd8c92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope/container/memory.events
Dec  5 02:21:44 compute-0 podman[464321]: 2025-12-05 02:21:44.699626538 +0000 UTC m=+0.229089486 container attach 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:21:44 compute-0 podman[464342]: 2025-12-05 02:21:44.770316013 +0000 UTC m=+0.049769209 container died 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9927bfbd29561224a3dc5ef1847c0d8adeda4f2ea8d25d5d2223132d8fa432d3-merged.mount: Deactivated successfully.
Dec  5 02:21:44 compute-0 podman[464342]: 2025-12-05 02:21:44.854024034 +0000 UTC m=+0.133477210 container remove 39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wright, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:21:44 compute-0 systemd[1]: libpod-conmon-39adaf01a479cdbd8c929057a517104407bd8fd39dcdd08e6e3f9e6de0ee3f3b.scope: Deactivated successfully.
Dec  5 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.115 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.115 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.153474404 +0000 UTC m=+0.081025536 container create f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.124380857 +0000 UTC m=+0.051932029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:45 compute-0 systemd[1]: Started libpod-conmon-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope.
Dec  5 02:21:45 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.32424088 +0000 UTC m=+0.251792082 container init f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.339317084 +0000 UTC m=+0.266868236 container start f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 02:21:45 compute-0 podman[464362]: 2025-12-05 02:21:45.346798504 +0000 UTC m=+0.274349656 container attach f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.382 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.384 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:21:45 compute-0 nova_compute[349548]: 2025-12-05 02:21:45.398 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:21:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:21:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:21:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:21:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591678610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:21:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2183: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.625 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.643 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:21:46 compute-0 nova_compute[349548]: 2025-12-05 02:21:46.644 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:21:46 compute-0 eager_jemison[464377]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:21:46 compute-0 eager_jemison[464377]: --> relative data size: 1.0
Dec  5 02:21:46 compute-0 eager_jemison[464377]: --> All data devices are unavailable
Dec  5 02:21:46 compute-0 systemd[1]: libpod-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Deactivated successfully.
Dec  5 02:21:46 compute-0 systemd[1]: libpod-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Consumed 1.301s CPU time.
Dec  5 02:21:46 compute-0 podman[464406]: 2025-12-05 02:21:46.775616724 +0000 UTC m=+0.050885740 container died f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:21:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-33473c531ea96d8ddfc377e4d713c49e3c1ef57caa3b65fc50749d60b3d064eb-merged.mount: Deactivated successfully.
Dec  5 02:21:46 compute-0 podman[464406]: 2025-12-05 02:21:46.888400961 +0000 UTC m=+0.163669907 container remove f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_jemison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Dec  5 02:21:46 compute-0 systemd[1]: libpod-conmon-f31ce91c05a3b09329bf6ca9f9006e75ad372e6fff109519bb17fbf21858d20c.scope: Deactivated successfully.
Dec  5 02:21:46 compute-0 podman[464419]: 2025-12-05 02:21:46.952873822 +0000 UTC m=+0.108384195 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  5 02:21:46 compute-0 podman[464421]: 2025-12-05 02:21:46.961824994 +0000 UTC m=+0.107886412 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  5 02:21:46 compute-0 podman[464420]: 2025-12-05 02:21:46.979482549 +0000 UTC m=+0.133239183 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, release-0.7.12=, version=9.4)
Dec  5 02:21:47 compute-0 nova_compute[349548]: 2025-12-05 02:21:47.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:47 compute-0 podman[464613]: 2025-12-05 02:21:47.987109679 +0000 UTC m=+0.080855232 container create 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:47.957372053 +0000 UTC m=+0.051117636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:48 compute-0 systemd[1]: Started libpod-conmon-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope.
Dec  5 02:21:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.126356269 +0000 UTC m=+0.220101912 container init 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.14702078 +0000 UTC m=+0.240766353 container start 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.153508312 +0000 UTC m=+0.247253935 container attach 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 02:21:48 compute-0 dreamy_mendeleev[464629]: 167 167
Dec  5 02:21:48 compute-0 systemd[1]: libpod-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope: Deactivated successfully.
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.160103807 +0000 UTC m=+0.253849360 container died 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 02:21:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14ad9a9f8075721f1042e05b95a69441bc05aac097b564dc590dc4b942e4d48-merged.mount: Deactivated successfully.
Dec  5 02:21:48 compute-0 podman[464613]: 2025-12-05 02:21:48.231442771 +0000 UTC m=+0.325188344 container remove 3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:21:48 compute-0 systemd[1]: libpod-conmon-3d99a0999eca9765e36d1d7289a5b5691f8e767a65107b15b48a64c042c73abb.scope: Deactivated successfully.
Dec  5 02:21:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.479343433 +0000 UTC m=+0.072682132 container create 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:21:48 compute-0 nova_compute[349548]: 2025-12-05 02:21:48.487 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.454516916 +0000 UTC m=+0.047855645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:48 compute-0 systemd[1]: Started libpod-conmon-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope.
Dec  5 02:21:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:48 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.647233869 +0000 UTC m=+0.240572658 container init 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.658023932 +0000 UTC m=+0.251362631 container start 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:21:48 compute-0 podman[464651]: 2025-12-05 02:21:48.663358932 +0000 UTC m=+0.256697711 container attach 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:21:49 compute-0 nova_compute[349548]: 2025-12-05 02:21:49.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:49 compute-0 silly_joliot[464668]: {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    "0": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "devices": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "/dev/loop3"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            ],
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_name": "ceph_lv0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_size": "21470642176",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "name": "ceph_lv0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "tags": {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_name": "ceph",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.crush_device_class": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.encrypted": "0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_id": "0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.vdo": "0"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            },
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "vg_name": "ceph_vg0"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        }
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    ],
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    "1": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "devices": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "/dev/loop4"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            ],
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_name": "ceph_lv1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_size": "21470642176",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "name": "ceph_lv1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "tags": {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_name": "ceph",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.crush_device_class": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.encrypted": "0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_id": "1",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.vdo": "0"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            },
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "vg_name": "ceph_vg1"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        }
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    ],
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    "2": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "devices": [
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "/dev/loop5"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            ],
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_name": "ceph_lv2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_size": "21470642176",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "name": "ceph_lv2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "tags": {
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.cluster_name": "ceph",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.crush_device_class": "",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.encrypted": "0",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osd_id": "2",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:                "ceph.vdo": "0"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            },
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "type": "block",
Dec  5 02:21:49 compute-0 silly_joliot[464668]:            "vg_name": "ceph_vg2"
Dec  5 02:21:49 compute-0 silly_joliot[464668]:        }
Dec  5 02:21:49 compute-0 silly_joliot[464668]:    ]
Dec  5 02:21:49 compute-0 silly_joliot[464668]: }
Dec  5 02:21:49 compute-0 systemd[1]: libpod-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope: Deactivated successfully.
Dec  5 02:21:49 compute-0 podman[464651]: 2025-12-05 02:21:49.571834817 +0000 UTC m=+1.165173536 container died 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:21:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8ed31d2b3681b87e1c93c130c524c89af2a0ceab43f45f8ff40ee886b52815-merged.mount: Deactivated successfully.
Dec  5 02:21:49 compute-0 podman[464651]: 2025-12-05 02:21:49.684392948 +0000 UTC m=+1.277731677 container remove 2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:21:49 compute-0 systemd[1]: libpod-conmon-2dfd20c83f2b6e712956c8464d7d29e03b633151785d12a0893b1bf5019d50e2.scope: Deactivated successfully.
Dec  5 02:21:50 compute-0 nova_compute[349548]: 2025-12-05 02:21:50.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:50 compute-0 podman[464825]: 2025-12-05 02:21:50.927876723 +0000 UTC m=+0.086489120 container create 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:21:50 compute-0 podman[464825]: 2025-12-05 02:21:50.892035526 +0000 UTC m=+0.050647983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:51 compute-0 systemd[1]: Started libpod-conmon-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope.
Dec  5 02:21:51 compute-0 nova_compute[349548]: 2025-12-05 02:21:51.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:51 compute-0 nova_compute[349548]: 2025-12-05 02:21:51.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:21:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.098653949 +0000 UTC m=+0.257266406 container init 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.116481939 +0000 UTC m=+0.275094306 container start 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.122800746 +0000 UTC m=+0.281413143 container attach 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  5 02:21:51 compute-0 exciting_knuth[464840]: 167 167
Dec  5 02:21:51 compute-0 systemd[1]: libpod-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope: Deactivated successfully.
Dec  5 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.129503155 +0000 UTC m=+0.288115572 container died 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:21:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0a02b0eb470474e9c2a49b7221ccd94a78c2ea2a1d226cccb06eaa7a429b091-merged.mount: Deactivated successfully.
Dec  5 02:21:51 compute-0 podman[464825]: 2025-12-05 02:21:51.216408615 +0000 UTC m=+0.375021012 container remove 26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_knuth, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:21:51 compute-0 systemd[1]: libpod-conmon-26215aba242d6cc3f6eb8b14412065e03ae60fb471813acafa0197654e4d9193.scope: Deactivated successfully.
Dec  5 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.525400974 +0000 UTC m=+0.093854897 container create 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.489594848 +0000 UTC m=+0.058048811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:21:51 compute-0 systemd[1]: Started libpod-conmon-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope.
Dec  5 02:21:51 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.714317869 +0000 UTC m=+0.282771842 container init 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.737620714 +0000 UTC m=+0.306074637 container start 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:21:51 compute-0 podman[464863]: 2025-12-05 02:21:51.746286357 +0000 UTC m=+0.314740340 container attach 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.092 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:21:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s wr, 0 op/s
Dec  5 02:21:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:21:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1587739206' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.600 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.625 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.739 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.740 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:21:52 compute-0 nova_compute[349548]: 2025-12-05 02:21:52.749 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:21:53 compute-0 sweet_beaver[464880]: {
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_id": 0,
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "type": "bluestore"
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    },
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_id": 1,
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "type": "bluestore"
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    },
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_id": 2,
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:        "type": "bluestore"
Dec  5 02:21:53 compute-0 sweet_beaver[464880]:    }
Dec  5 02:21:53 compute-0 sweet_beaver[464880]: }
Dec  5 02:21:53 compute-0 systemd[1]: libpod-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Deactivated successfully.
Dec  5 02:21:53 compute-0 systemd[1]: libpod-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Consumed 1.282s CPU time.
Dec  5 02:21:53 compute-0 podman[464935]: 2025-12-05 02:21:53.142729718 +0000 UTC m=+0.065414468 container died 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b783119f3c40d9749e9237a933b37ad08090de0347ea9d5d410520b8aae9f62c-merged.mount: Deactivated successfully.
Dec  5 02:21:53 compute-0 podman[464935]: 2025-12-05 02:21:53.249467376 +0000 UTC m=+0.172152116 container remove 8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:21:53 compute-0 systemd[1]: libpod-conmon-8690ad637b2bfdbdfb33b751020337d5c2df2a4b70ce975210baefb862f05aa9.scope: Deactivated successfully.
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.319 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.320 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3454MB free_disk=59.89703369140625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.321 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.322 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8c027028-f77e-47bc-84eb-4b1776dca2ed does not exist
Dec  5 02:21:53 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 790d325f-67f9-4c67-8596-a6b94e915f9f does not exist
Dec  5 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.438 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.439 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.440 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.440 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.490 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:53 compute-0 nova_compute[349548]: 2025-12-05 02:21:53.498 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:21:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:53 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:21:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:21:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999271080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.027 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.042 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.070 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.072 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:21:54 compute-0 nova_compute[349548]: 2025-12-05 02:21:54.073 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:21:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.221 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.222 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:21:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:21:56.223 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:21:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.096 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.097 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:21:57 compute-0 nova_compute[349548]: 2025-12-05 02:21:57.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:21:58 compute-0 nova_compute[349548]: 2025-12-05 02:21:58.494 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:21:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s wr, 1 op/s
Dec  5 02:21:59 compute-0 podman[465022]: 2025-12-05 02:21:59.713476352 +0000 UTC m=+0.101232935 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Dec  5 02:21:59 compute-0 podman[465020]: 2025-12-05 02:21:59.741956572 +0000 UTC m=+0.129685164 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:21:59 compute-0 podman[158197]: time="2025-12-05T02:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:21:59 compute-0 podman[465019]: 2025-12-05 02:21:59.756814509 +0000 UTC m=+0.146724832 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:21:59 compute-0 podman[465021]: 2025-12-05 02:21:59.763401594 +0000 UTC m=+0.144597742 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  5 02:21:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8667 "" "Go-http-client/1.1"
Dec  5 02:22:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: ERROR   02:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:22:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:22:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:22:02 compute-0 nova_compute[349548]: 2025-12-05 02:22:02.608 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:03 compute-0 nova_compute[349548]: 2025-12-05 02:22:03.496 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:22:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:22:07 compute-0 nova_compute[349548]: 2025-12-05 02:22:07.609 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:08 compute-0 nova_compute[349548]: 2025-12-05 02:22:08.499 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 9.0 KiB/s wr, 1 op/s
Dec  5 02:22:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Dec  5 02:22:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:12 compute-0 nova_compute[349548]: 2025-12-05 02:22:12.612 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:12 compute-0 podman[465107]: 2025-12-05 02:22:12.715164117 +0000 UTC m=+0.121556405 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  5 02:22:12 compute-0 podman[465108]: 2025-12-05 02:22:12.734798618 +0000 UTC m=+0.134072146 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:22:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:13 compute-0 nova_compute[349548]: 2025-12-05 02:22:13.503 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:22:16
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes']
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:22:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:17 compute-0 nova_compute[349548]: 2025-12-05 02:22:17.616 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:17 compute-0 podman[465149]: 2025-12-05 02:22:17.720796159 +0000 UTC m=+0.124269231 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:22:17 compute-0 podman[465151]: 2025-12-05 02:22:17.726922901 +0000 UTC m=+0.110560436 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:22:17 compute-0 podman[465150]: 2025-12-05 02:22:17.73114486 +0000 UTC m=+0.126052742 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., version=9.4, container_name=kepler, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc.)
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:22:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:18 compute-0 nova_compute[349548]: 2025-12-05 02:22:18.507 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 341 B/s wr, 0 op/s
Dec  5 02:22:22 compute-0 nova_compute[349548]: 2025-12-05 02:22:22.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:23 compute-0 nova_compute[349548]: 2025-12-05 02:22:23.511 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:22:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 45K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1314 writes, 5712 keys, 1314 commit groups, 1.0 writes per commit group, ingest: 8.62 MB, 0.01 MB/s#012Interval WAL: 1314 writes, 1314 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    102.3      0.54              0.25        31    0.017       0      0       0.0       0.0#012  L6      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    124.2    102.0      2.19              0.97        30    0.073    159K    16K       0.0       0.0#012 Sum      1/0    6.46 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1     99.7    102.1      2.73              1.22        61    0.045    159K    16K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    123.7    125.4      0.31              0.16         8    0.038     25K   2045       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    124.2    102.0      2.19              0.97        30    0.073    159K    16K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    103.2      0.53              0.25        30    0.018       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.054, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.07 MB/s write, 0.27 GB read, 0.06 MB/s read, 2.7 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.04 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 32.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000266 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2067,31.01 MB,10.2003%) FilterBlock(62,452.73 KB,0.145435%) IndexBlock(62,747.39 KB,0.24009%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:22:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:22:27 compute-0 nova_compute[349548]: 2025-12-05 02:22:27.622 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:28 compute-0 nova_compute[349548]: 2025-12-05 02:22:28.514 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:29 compute-0 podman[158197]: time="2025-12-05T02:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:22:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8666 "" "Go-http-client/1.1"
Dec  5 02:22:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:30 compute-0 podman[465208]: 2025-12-05 02:22:30.693629025 +0000 UTC m=+0.091883561 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:22:30 compute-0 podman[465207]: 2025-12-05 02:22:30.701955099 +0000 UTC m=+0.116949755 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:22:30 compute-0 podman[465215]: 2025-12-05 02:22:30.723570746 +0000 UTC m=+0.105800832 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  5 02:22:30 compute-0 podman[465214]: 2025-12-05 02:22:30.768148998 +0000 UTC m=+0.154695465 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec  5 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:22:31 compute-0 openstack_network_exporter[366555]: ERROR   02:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:22:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:32 compute-0 nova_compute[349548]: 2025-12-05 02:22:32.626 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:33 compute-0 nova_compute[349548]: 2025-12-05 02:22:33.518 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:37 compute-0 nova_compute[349548]: 2025-12-05 02:22:37.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.326 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.327 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.330 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.340 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.345 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.346 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:22:38.346661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:22:38.348966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.371 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.372 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.397 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.398 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.399 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.401 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:22:38.401244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.403 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.404 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:22:38.404789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.457 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.458 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 nova_compute[349548]: 2025-12-05 02:22:38.522 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.523 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 31304192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.523 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.524 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2882860455 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.525 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:22:38.524751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 200982064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.526 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:22:38.527110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.527 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.528 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.529 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:22:38.529320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.530 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.531 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 73129984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.532 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:22:38.531604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.533 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.534 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:22:38.534152) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.565 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.600 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.602 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10991220303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.603 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:22:38.601870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.604 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:22:38.604621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.605 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.606 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:22:38.606776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.611 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.615 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.616 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:22:38.616617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.618 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:22:38.618628) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.619 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:22:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.620 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:22:38.621149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.621 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.622 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.623 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:22:38.623485) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.625 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.626 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:22:38.625362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.627 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:22:38.627775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 42.26953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.629 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:22:38.629701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:22:38.632422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.634 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.635 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:22:38.635275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.636 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:22:38.637124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.637 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 337740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 335240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:22:38.639812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.641 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.642 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.643 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.644 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.645 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:22:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:22:38.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:22:38.641601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:22:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:42 compute-0 nova_compute[349548]: 2025-12-05 02:22:42.632 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:43 compute-0 nova_compute[349548]: 2025-12-05 02:22:43.525 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:43 compute-0 podman[465295]: 2025-12-05 02:22:43.713150756 +0000 UTC m=+0.116872294 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 02:22:43 compute-0 podman[465296]: 2025-12-05 02:22:43.724405712 +0000 UTC m=+0.129063986 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:22:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:22:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:22:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:22:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4268910848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.406 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:22:46 compute-0 nova_compute[349548]: 2025-12-05 02:22:46.407 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:22:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2213: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:47 compute-0 nova_compute[349548]: 2025-12-05 02:22:47.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:48 compute-0 nova_compute[349548]: 2025-12-05 02:22:48.529 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:48 compute-0 podman[465339]: 2025-12-05 02:22:48.752358271 +0000 UTC m=+0.147365520 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vcs-type=git)
Dec  5 02:22:48 compute-0 podman[465338]: 2025-12-05 02:22:48.755421797 +0000 UTC m=+0.160057057 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:22:48 compute-0 podman[465340]: 2025-12-05 02:22:48.756760774 +0000 UTC m=+0.146615139 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.293 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.313 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:22:49 compute-0 nova_compute[349548]: 2025-12-05 02:22:49.313 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:22:50 compute-0 nova_compute[349548]: 2025-12-05 02:22:50.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:50 compute-0 nova_compute[349548]: 2025-12-05 02:22:50.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.101 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.102 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:22:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:22:52 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3302821614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:22:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.637 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.639 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.781 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.790 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:22:52 compute-0 nova_compute[349548]: 2025-12-05 02:22:52.791 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:22:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.478 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.482 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3498MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.483 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.484 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.533 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.593 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.594 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:22:53 compute-0 nova_compute[349548]: 2025-12-05 02:22:53.659 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546819003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.166 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.181 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.201 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.203 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:22:54 compute-0 nova_compute[349548]: 2025-12-05 02:22:54.204 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:22:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3bfa0005-1e4c-4b8f-b5dc-e2a6e9cec7b8 does not exist
Dec  5 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0347786b-636d-47b7-8583-6108b094601e does not exist
Dec  5 02:22:54 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d4acefaa-04a8-473a-a4c5-344a872483fa does not exist
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:22:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:22:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:22:55 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:22:55 compute-0 nova_compute[349548]: 2025-12-05 02:22:55.204 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:56 compute-0 nova_compute[349548]: 2025-12-05 02:22:56.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.193210062 +0000 UTC m=+0.089364431 container create 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.223 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.224 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:22:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:22:56.224 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.159417843 +0000 UTC m=+0.055572262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:22:56 compute-0 systemd[1]: Started libpod-conmon-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope.
Dec  5 02:22:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.362661082 +0000 UTC m=+0.258815471 container init 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.38147538 +0000 UTC m=+0.277629769 container start 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.38965295 +0000 UTC m=+0.285807369 container attach 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:22:56 compute-0 lucid_allen[465724]: 167 167
Dec  5 02:22:56 compute-0 systemd[1]: libpod-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope: Deactivated successfully.
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.395672409 +0000 UTC m=+0.291826798 container died 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:22:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f141b35b3ce0b3d3a1704558fbe4eaacc84c15e122b04ce4c1b3ab1c71f3ff5-merged.mount: Deactivated successfully.
Dec  5 02:22:56 compute-0 podman[465708]: 2025-12-05 02:22:56.495594795 +0000 UTC m=+0.391749174 container remove 89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_allen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:22:56 compute-0 systemd[1]: libpod-conmon-89210e048792d271fb91655843b9510647a5ef7edac0723aee861aea7aabc151.scope: Deactivated successfully.
Dec  5 02:22:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.763210051 +0000 UTC m=+0.062850726 container create 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.742264893 +0000 UTC m=+0.041905598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:22:56 compute-0 systemd[1]: Started libpod-conmon-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope.
Dec  5 02:22:56 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:22:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.933329269 +0000 UTC m=+0.232970044 container init 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.971608564 +0000 UTC m=+0.271249259 container start 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:22:56 compute-0 podman[465746]: 2025-12-05 02:22:56.977692645 +0000 UTC m=+0.277333370 container attach 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:22:57 compute-0 nova_compute[349548]: 2025-12-05 02:22:57.639 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:58 compute-0 nova_compute[349548]: 2025-12-05 02:22:58.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:22:58 compute-0 musing_bell[465762]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:22:58 compute-0 musing_bell[465762]: --> relative data size: 1.0
Dec  5 02:22:58 compute-0 musing_bell[465762]: --> All data devices are unavailable
Dec  5 02:22:58 compute-0 systemd[1]: libpod-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Deactivated successfully.
Dec  5 02:22:58 compute-0 podman[465746]: 2025-12-05 02:22:58.128502827 +0000 UTC m=+1.428143522 container died 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 02:22:58 compute-0 systemd[1]: libpod-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Consumed 1.074s CPU time.
Dec  5 02:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eec4d152ff69ac7157e9fa3d1004d2d6c243028dca1b19ae5913af197f3f5a4-merged.mount: Deactivated successfully.
Dec  5 02:22:58 compute-0 podman[465746]: 2025-12-05 02:22:58.2016042 +0000 UTC m=+1.501244885 container remove 7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 02:22:58 compute-0 systemd[1]: libpod-conmon-7b4d994dbce429895b221f27c5ace852fd86fcbae4406af928c51cdfe3fea25e.scope: Deactivated successfully.
Dec  5 02:22:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:22:58 compute-0 nova_compute[349548]: 2025-12-05 02:22:58.537 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:22:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.366554048 +0000 UTC m=+0.076752367 container create aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.335011552 +0000 UTC m=+0.045209961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:22:59 compute-0 systemd[1]: Started libpod-conmon-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope.
Dec  5 02:22:59 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.505550461 +0000 UTC m=+0.215748840 container init aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.521367146 +0000 UTC m=+0.231565505 container start aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.528498236 +0000 UTC m=+0.238696595 container attach aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:22:59 compute-0 gallant_kowalevski[465956]: 167 167
Dec  5 02:22:59 compute-0 systemd[1]: libpod-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope: Deactivated successfully.
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.533133636 +0000 UTC m=+0.243331995 container died aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 02:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-064d9b997fa7f719f664d6bd6c1afcd6640b12a6b593d8a8956a9b7d9d2494be-merged.mount: Deactivated successfully.
Dec  5 02:22:59 compute-0 podman[465940]: 2025-12-05 02:22:59.619548013 +0000 UTC m=+0.329746342 container remove aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kowalevski, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:22:59 compute-0 systemd[1]: libpod-conmon-aa26114970af139a98612e44a41252b1e21ea1c58bfb2e2ba2ea24d5f9402ee0.scope: Deactivated successfully.
Dec  5 02:22:59 compute-0 podman[158197]: time="2025-12-05T02:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:22:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8660 "" "Go-http-client/1.1"
Dec  5 02:22:59 compute-0 podman[465979]: 2025-12-05 02:22:59.907255904 +0000 UTC m=+0.081373787 container create d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:22:59 compute-0 podman[465979]: 2025-12-05 02:22:59.873861376 +0000 UTC m=+0.047979319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:23:00 compute-0 systemd[1]: Started libpod-conmon-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope.
Dec  5 02:23:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.075433787 +0000 UTC m=+0.249551720 container init d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.096485008 +0000 UTC m=+0.270602861 container start d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:23:00 compute-0 podman[465979]: 2025-12-05 02:23:00.104639477 +0000 UTC m=+0.278757410 container attach d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:23:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:00 compute-0 elastic_goodall[465996]: {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    "0": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "devices": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "/dev/loop3"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            ],
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_name": "ceph_lv0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_size": "21470642176",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "name": "ceph_lv0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "tags": {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_name": "ceph",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.crush_device_class": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.encrypted": "0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_id": "0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.vdo": "0"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            },
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "vg_name": "ceph_vg0"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        }
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    ],
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    "1": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "devices": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "/dev/loop4"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            ],
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_name": "ceph_lv1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_size": "21470642176",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "name": "ceph_lv1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "tags": {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_name": "ceph",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.crush_device_class": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.encrypted": "0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_id": "1",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.vdo": "0"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            },
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "vg_name": "ceph_vg1"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        }
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    ],
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    "2": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "devices": [
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "/dev/loop5"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            ],
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_name": "ceph_lv2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_size": "21470642176",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "name": "ceph_lv2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "tags": {
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.cluster_name": "ceph",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.crush_device_class": "",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.encrypted": "0",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osd_id": "2",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:                "ceph.vdo": "0"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            },
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "type": "block",
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:            "vg_name": "ceph_vg2"
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:        }
Dec  5 02:23:00 compute-0 elastic_goodall[465996]:    ]
Dec  5 02:23:00 compute-0 elastic_goodall[465996]: }
Dec  5 02:23:01 compute-0 systemd[1]: libpod-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope: Deactivated successfully.
Dec  5 02:23:01 compute-0 podman[466005]: 2025-12-05 02:23:01.086077832 +0000 UTC m=+0.063245807 container died d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8d47acaf05e6c3deec280a40fe3b4fa55ea213c7c5375dbadfec42696b2d84c-merged.mount: Deactivated successfully.
Dec  5 02:23:01 compute-0 podman[466005]: 2025-12-05 02:23:01.157769235 +0000 UTC m=+0.134937170 container remove d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:23:01 compute-0 systemd[1]: libpod-conmon-d7112ed883246c4c2b5d1828859a0cac6522c3faf23a1bd1db4de43d8a9b2807.scope: Deactivated successfully.
Dec  5 02:23:01 compute-0 podman[466006]: 2025-12-05 02:23:01.181743839 +0000 UTC m=+0.126256737 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible)
Dec  5 02:23:01 compute-0 podman[466012]: 2025-12-05 02:23:01.192050518 +0000 UTC m=+0.134254581 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:23:01 compute-0 podman[466018]: 2025-12-05 02:23:01.204263381 +0000 UTC m=+0.127760289 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Dec  5 02:23:01 compute-0 podman[466014]: 2025-12-05 02:23:01.261092107 +0000 UTC m=+0.196072658 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: ERROR   02:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:23:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.198523836 +0000 UTC m=+0.088163827 container create ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.159573422 +0000 UTC m=+0.049213413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:23:02 compute-0 systemd[1]: Started libpod-conmon-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope.
Dec  5 02:23:02 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.350390791 +0000 UTC m=+0.240030842 container init ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.360734382 +0000 UTC m=+0.250374343 container start ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.367821191 +0000 UTC m=+0.257461252 container attach ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:23:02 compute-0 priceless_einstein[466255]: 167 167
Dec  5 02:23:02 compute-0 systemd[1]: libpod-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope: Deactivated successfully.
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.370339581 +0000 UTC m=+0.259979572 container died ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5645ff46ac9214f3262221e33abdf7a9621eb3afaab316764f7ae6e81d81e24f-merged.mount: Deactivated successfully.
Dec  5 02:23:02 compute-0 podman[466240]: 2025-12-05 02:23:02.436008776 +0000 UTC m=+0.325648737 container remove ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_einstein, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Dec  5 02:23:02 compute-0 systemd[1]: libpod-conmon-ddb6596bd04ebbe48929c33de13fd9f5e6a91e6bab1c603aa694c0854e38bb3d.scope: Deactivated successfully.
Dec  5 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.622587736 +0000 UTC m=+0.055539561 container create 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:23:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:02 compute-0 nova_compute[349548]: 2025-12-05 02:23:02.642 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:02 compute-0 systemd[1]: Started libpod-conmon-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope.
Dec  5 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.597267405 +0000 UTC m=+0.030219290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:23:02 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.771809786 +0000 UTC m=+0.204761621 container init 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.794405671 +0000 UTC m=+0.227357496 container start 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:23:02 compute-0 podman[466277]: 2025-12-05 02:23:02.799313319 +0000 UTC m=+0.232265154 container attach 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:23:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:03 compute-0 nova_compute[349548]: 2025-12-05 02:23:03.541 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:03 compute-0 wizardly_booth[466293]: {
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_id": 0,
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "type": "bluestore"
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    },
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_id": 1,
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "type": "bluestore"
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    },
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_id": 2,
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:        "type": "bluestore"
Dec  5 02:23:03 compute-0 wizardly_booth[466293]:    }
Dec  5 02:23:03 compute-0 wizardly_booth[466293]: }
Dec  5 02:23:03 compute-0 systemd[1]: libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Deactivated successfully.
Dec  5 02:23:03 compute-0 systemd[1]: libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Consumed 1.168s CPU time.
Dec  5 02:23:03 compute-0 conmon[466293]: conmon 1ccd4501e28ba3171708 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope/container/memory.events
Dec  5 02:23:03 compute-0 podman[466277]: 2025-12-05 02:23:03.972081437 +0000 UTC m=+1.405033272 container died 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d5f15f084e406a04f74f32ebb055915011ac82428983786c9a6854301b63f29-merged.mount: Deactivated successfully.
Dec  5 02:23:04 compute-0 podman[466277]: 2025-12-05 02:23:04.055535641 +0000 UTC m=+1.488487516 container remove 1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:23:04 compute-0 systemd[1]: libpod-conmon-1ccd4501e28ba317170863879e8c5c9e03c8a552f94eb4b9ab989ae05c51ca9b.scope: Deactivated successfully.
Dec  5 02:23:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:23:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:23:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:23:04 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:23:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2c302bd2-db7b-4c21-97cb-ccf9f4d9c2ea does not exist
Dec  5 02:23:04 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 9c1246e1-4abd-4bcc-883a-601574855148 does not exist
Dec  5 02:23:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2222: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:23:05 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:23:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:07 compute-0 nova_compute[349548]: 2025-12-05 02:23:07.645 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:08 compute-0 nova_compute[349548]: 2025-12-05 02:23:08.545 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:12 compute-0 nova_compute[349548]: 2025-12-05 02:23:12.650 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:13 compute-0 nova_compute[349548]: 2025-12-05 02:23:13.548 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:14 compute-0 podman[466392]: 2025-12-05 02:23:14.745016381 +0000 UTC m=+0.146669310 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:23:14 compute-0 podman[466391]: 2025-12-05 02:23:14.763025507 +0000 UTC m=+0.164244524 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:23:16
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'images', 'backups', '.rgw.root']
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:23:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2228: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:17 compute-0 nova_compute[349548]: 2025-12-05 02:23:17.652 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:23:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:18 compute-0 nova_compute[349548]: 2025-12-05 02:23:18.551 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:19 compute-0 podman[466431]: 2025-12-05 02:23:19.729647737 +0000 UTC m=+0.131617348 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:23:19 compute-0 podman[466433]: 2025-12-05 02:23:19.735298025 +0000 UTC m=+0.121636277 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  5 02:23:19 compute-0 podman[466432]: 2025-12-05 02:23:19.737828116 +0000 UTC m=+0.135417644 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:23:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:22 compute-0 nova_compute[349548]: 2025-12-05 02:23:22.653 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:23 compute-0 nova_compute[349548]: 2025-12-05 02:23:23.555 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:23:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:23:27 compute-0 nova_compute[349548]: 2025-12-05 02:23:27.656 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:28 compute-0 nova_compute[349548]: 2025-12-05 02:23:28.558 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:29 compute-0 podman[158197]: time="2025-12-05T02:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:23:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8661 "" "Go-http-client/1.1"
Dec  5 02:23:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: ERROR   02:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:23:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:23:31 compute-0 podman[466485]: 2025-12-05 02:23:31.691736197 +0000 UTC m=+0.100365590 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 02:23:31 compute-0 podman[466486]: 2025-12-05 02:23:31.713108147 +0000 UTC m=+0.104366292 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:23:31 compute-0 podman[466493]: 2025-12-05 02:23:31.762445623 +0000 UTC m=+0.133573333 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6)
Dec  5 02:23:31 compute-0 podman[466488]: 2025-12-05 02:23:31.783847254 +0000 UTC m=+0.166625041 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:23:32 compute-0 nova_compute[349548]: 2025-12-05 02:23:32.658 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:33 compute-0 nova_compute[349548]: 2025-12-05 02:23:33.561 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:37 compute-0 nova_compute[349548]: 2025-12-05 02:23:37.662 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.901334) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417901371, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1699, "num_deletes": 251, "total_data_size": 2789867, "memory_usage": 2834800, "flush_reason": "Manual Compaction"}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417920351, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2730021, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 44577, "largest_seqno": 46275, "table_properties": {"data_size": 2722127, "index_size": 4773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15848, "raw_average_key_size": 19, "raw_value_size": 2706500, "raw_average_value_size": 3404, "num_data_blocks": 213, "num_entries": 795, "num_filter_entries": 795, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901234, "oldest_key_time": 1764901234, "file_creation_time": 1764901417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 19086 microseconds, and 7999 cpu microseconds.
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.920414) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2730021 bytes OK
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.920442) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923407) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923429) EVENT_LOG_v1 {"time_micros": 1764901417923423, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.923448) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2782595, prev total WAL file size 2782595, number of live WAL files 2.
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.924757) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2666KB)], [107(6617KB)]
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417924797, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 9506313, "oldest_snapshot_seqno": -1}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6078 keys, 7758851 bytes, temperature: kUnknown
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417984607, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 7758851, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7721165, "index_size": 21384, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15237, "raw_key_size": 158064, "raw_average_key_size": 26, "raw_value_size": 7614122, "raw_average_value_size": 1252, "num_data_blocks": 847, "num_entries": 6078, "num_filter_entries": 6078, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.985089) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 7758851 bytes
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.987978) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.7 rd, 129.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 6.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 6592, records dropped: 514 output_compression: NoCompression
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.988010) EVENT_LOG_v1 {"time_micros": 1764901417987995, "job": 64, "event": "compaction_finished", "compaction_time_micros": 59903, "compaction_time_cpu_micros": 31913, "output_level": 6, "num_output_files": 1, "total_output_size": 7758851, "num_input_records": 6592, "num_output_records": 6078, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417989262, "job": 64, "event": "table_file_deletion", "file_number": 109}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901417992564, "job": 64, "event": "table_file_deletion", "file_number": 107}
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.924357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:37 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:23:37.992989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:23:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:38 compute-0 nova_compute[349548]: 2025-12-05 02:23:38.566 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:42 compute-0 nova_compute[349548]: 2025-12-05 02:23:42.666 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:43 compute-0 nova_compute[349548]: 2025-12-05 02:23:43.570 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:23:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:23:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:23:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2231307422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:23:45 compute-0 podman[466566]: 2025-12-05 02:23:45.722990439 +0000 UTC m=+0.121482472 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:23:45 compute-0 podman[466565]: 2025-12-05 02:23:45.751689135 +0000 UTC m=+0.156958668 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:23:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.439 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.440 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.440 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:23:47 compute-0 nova_compute[349548]: 2025-12-05 02:23:47.667 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.574 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.680 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [{"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.695 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:23:48 compute-0 nova_compute[349548]: 2025-12-05 02:23:48.696 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:23:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:50 compute-0 podman[466609]: 2025-12-05 02:23:50.712527634 +0000 UTC m=+0.111173903 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543)
Dec  5 02:23:50 compute-0 podman[466610]: 2025-12-05 02:23:50.72947013 +0000 UTC m=+0.132180983 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  5 02:23:50 compute-0 podman[466608]: 2025-12-05 02:23:50.748988558 +0000 UTC m=+0.153489171 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes 
Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:23:51 compute-0 nova_compute[349548]: 2025-12-05 02:23:51.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:23:52 compute-0 nova_compute[349548]: 2025-12-05 02:23:52.671 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.115 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.117 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.117 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:23:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.576 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:23:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3548959651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.667 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.792 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.793 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.802 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:23:53 compute-0 nova_compute[349548]: 2025-12-05 02:23:53.803 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.426 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3488MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.427 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.551 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.551 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.570 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.594 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.595 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.610 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.653 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 02:23:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:54 compute-0 nova_compute[349548]: 2025-12-05 02:23:54.746 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:23:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:23:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739333258' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.275 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.286 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.304 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.307 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:23:55 compute-0 nova_compute[349548]: 2025-12-05 02:23:55.308 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.225 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.226 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:23:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:23:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:23:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.305 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.342 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.342 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:57 compute-0 nova_compute[349548]: 2025-12-05 02:23:57.674 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:23:58 compute-0 nova_compute[349548]: 2025-12-05 02:23:58.580 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:23:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:23:59 compute-0 nova_compute[349548]: 2025-12-05 02:23:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:23:59 compute-0 podman[158197]: time="2025-12-05T02:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:23:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8663 "" "Go-http-client/1.1"
Dec  5 02:24:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:24:01 compute-0 openstack_network_exporter[366555]: ERROR   02:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:24:02 compute-0 nova_compute[349548]: 2025-12-05 02:24:02.676 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:02 compute-0 podman[466712]: 2025-12-05 02:24:02.717664912 +0000 UTC m=+0.119261371 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:24:02 compute-0 podman[466719]: 2025-12-05 02:24:02.727258951 +0000 UTC m=+0.114232389 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec  5 02:24:02 compute-0 podman[466711]: 2025-12-05 02:24:02.733785444 +0000 UTC m=+0.149224802 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:24:02 compute-0 podman[466713]: 2025-12-05 02:24:02.769292552 +0000 UTC m=+0.162056263 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:24:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:03 compute-0 nova_compute[349548]: 2025-12-05 02:24:03.584 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d088cadc-97a8-4b1a-bf55-6536012cd2bb does not exist
Dec  5 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev faacd52a-bced-489b-bded-5bc36cd515c9 does not exist
Dec  5 02:24:05 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a5acf6f2-7b27-4d53-8f15-5b733cde70c4 does not exist
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:24:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:24:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:06 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:24:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:06 compute-0 podman[467060]: 2025-12-05 02:24:06.885230138 +0000 UTC m=+0.079075402 container create a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:24:06 compute-0 systemd[1]: Started libpod-conmon-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope.
Dec  5 02:24:06 compute-0 podman[467060]: 2025-12-05 02:24:06.853679782 +0000 UTC m=+0.047525106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.017335398 +0000 UTC m=+0.211180682 container init a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.034164581 +0000 UTC m=+0.228009865 container start a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  5 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.041372153 +0000 UTC m=+0.235217437 container attach a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:24:07 compute-0 determined_chatterjee[467076]: 167 167
Dec  5 02:24:07 compute-0 systemd[1]: libpod-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope: Deactivated successfully.
Dec  5 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.04516654 +0000 UTC m=+0.239011824 container died a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-64eaf4cfecac5b5d7f99519898db350aaa0d864b706b84bbbb505a98632e76f4-merged.mount: Deactivated successfully.
Dec  5 02:24:07 compute-0 podman[467060]: 2025-12-05 02:24:07.122689826 +0000 UTC m=+0.316535080 container remove a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:24:07 compute-0 systemd[1]: libpod-conmon-a86886461fb495a47b0ce9b3ca593d50e2ac77c96bfd94331100bc1501314156.scope: Deactivated successfully.
Dec  5 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.397834044 +0000 UTC m=+0.076459639 container create f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.366082552 +0000 UTC m=+0.044708197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:07 compute-0 systemd[1]: Started libpod-conmon-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope.
Dec  5 02:24:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.57571013 +0000 UTC m=+0.254335705 container init f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.597191443 +0000 UTC m=+0.275817008 container start f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:24:07 compute-0 podman[467099]: 2025-12-05 02:24:07.602019919 +0000 UTC m=+0.280645504 container attach f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:24:07 compute-0 nova_compute[349548]: 2025-12-05 02:24:07.681 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:08 compute-0 nova_compute[349548]: 2025-12-05 02:24:08.589 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:08 compute-0 keen_curran[467114]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:24:08 compute-0 keen_curran[467114]: --> relative data size: 1.0
Dec  5 02:24:08 compute-0 keen_curran[467114]: --> All data devices are unavailable
Dec  5 02:24:08 compute-0 systemd[1]: libpod-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Deactivated successfully.
Dec  5 02:24:08 compute-0 systemd[1]: libpod-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Consumed 1.260s CPU time.
Dec  5 02:24:08 compute-0 podman[467099]: 2025-12-05 02:24:08.938059132 +0000 UTC m=+1.616684747 container died f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8dfd8744e51bf35fcf2090e1a0584581ee1801652d8f7a211e071ece7dfe4d4-merged.mount: Deactivated successfully.
Dec  5 02:24:09 compute-0 podman[467099]: 2025-12-05 02:24:09.045545501 +0000 UTC m=+1.724171076 container remove f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:24:09 compute-0 systemd[1]: libpod-conmon-f047144d4a4bcb52799bbcf3d0c6b15a2429ef066429b65787e026e01018c1f6.scope: Deactivated successfully.
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.200700595 +0000 UTC m=+0.102413577 container create c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.151495803 +0000 UTC m=+0.053208825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:10 compute-0 systemd[1]: Started libpod-conmon-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope.
Dec  5 02:24:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.336223431 +0000 UTC m=+0.237936453 container init c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.355353449 +0000 UTC m=+0.257066431 container start c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:24:10 compute-0 serene_almeida[467307]: 167 167
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.362837699 +0000 UTC m=+0.264550681 container attach c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.366413639 +0000 UTC m=+0.268126621 container died c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:24:10 compute-0 systemd[1]: libpod-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope: Deactivated successfully.
Dec  5 02:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0c2d72d42a611c4f808ff2f778a22c970c538521c2db42247bd567546aca2e-merged.mount: Deactivated successfully.
Dec  5 02:24:10 compute-0 podman[467292]: 2025-12-05 02:24:10.449662467 +0000 UTC m=+0.351375449 container remove c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:24:10 compute-0 systemd[1]: libpod-conmon-c4b89e938c23cf3957a1d57142445cfaedae4c416752d3bfa8baf538bd54c7cb.scope: Deactivated successfully.
Dec  5 02:24:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.767204695 +0000 UTC m=+0.099736201 container create ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Dec  5 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.729876986 +0000 UTC m=+0.062408532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:10 compute-0 systemd[1]: Started libpod-conmon-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope.
Dec  5 02:24:10 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.948046694 +0000 UTC m=+0.280578250 container init ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.977630325 +0000 UTC m=+0.310161831 container start ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Dec  5 02:24:10 compute-0 podman[467331]: 2025-12-05 02:24:10.984813716 +0000 UTC m=+0.317345273 container attach ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]: {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    "0": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "devices": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "/dev/loop3"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            ],
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_name": "ceph_lv0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_size": "21470642176",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "name": "ceph_lv0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "tags": {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_name": "ceph",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.crush_device_class": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.encrypted": "0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_id": "0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.vdo": "0"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            },
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "vg_name": "ceph_vg0"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        }
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    ],
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    "1": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "devices": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "/dev/loop4"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            ],
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_name": "ceph_lv1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_size": "21470642176",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "name": "ceph_lv1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "tags": {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_name": "ceph",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.crush_device_class": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.encrypted": "0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_id": "1",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.vdo": "0"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            },
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "vg_name": "ceph_vg1"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        }
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    ],
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    "2": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "devices": [
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "/dev/loop5"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            ],
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_name": "ceph_lv2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_size": "21470642176",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "name": "ceph_lv2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "tags": {
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.cluster_name": "ceph",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.crush_device_class": "",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.encrypted": "0",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osd_id": "2",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:                "ceph.vdo": "0"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            },
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "type": "block",
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:            "vg_name": "ceph_vg2"
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:        }
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]:    ]
Dec  5 02:24:11 compute-0 dazzling_hertz[467347]: }
Dec  5 02:24:11 compute-0 systemd[1]: libpod-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope: Deactivated successfully.
Dec  5 02:24:11 compute-0 podman[467331]: 2025-12-05 02:24:11.831190198 +0000 UTC m=+1.163721674 container died ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:24:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-45ab2ff402b86960098750a7e50dc0a4ca62c4f508e7052a39892cb8b2aae942-merged.mount: Deactivated successfully.
Dec  5 02:24:11 compute-0 podman[467331]: 2025-12-05 02:24:11.926509075 +0000 UTC m=+1.259040581 container remove ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hertz, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:24:11 compute-0 systemd[1]: libpod-conmon-ddd7f85ef2cbcbe11d6df9d2d9f90f34f4ba2c850c3bd137580278d8493d3e66.scope: Deactivated successfully.
Dec  5 02:24:12 compute-0 nova_compute[349548]: 2025-12-05 02:24:12.683 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.093404458 +0000 UTC m=+0.082945440 container create 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.072324066 +0000 UTC m=+0.061865068 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:13 compute-0 systemd[1]: Started libpod-conmon-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope.
Dec  5 02:24:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.240745457 +0000 UTC m=+0.230286529 container init 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.259337759 +0000 UTC m=+0.248878771 container start 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:24:13 compute-0 podman[467505]: 2025-12-05 02:24:13.26652003 +0000 UTC m=+0.256061042 container attach 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:24:13 compute-0 peaceful_buck[467520]: 167 167
Dec  5 02:24:13 compute-0 systemd[1]: libpod-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope: Deactivated successfully.
Dec  5 02:24:13 compute-0 podman[467525]: 2025-12-05 02:24:13.360593903 +0000 UTC m=+0.063570617 container died 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:24:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e3c47d56e4402339041cb994e60580ede9e0748d602fc1eb30ec3836f40387b-merged.mount: Deactivated successfully.
Dec  5 02:24:13 compute-0 podman[467525]: 2025-12-05 02:24:13.451554307 +0000 UTC m=+0.154530951 container remove 25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_buck, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:24:13 compute-0 systemd[1]: libpod-conmon-25e54172e4bd8b5e62cece1b1342e49e77f30d2b2eaae75bbffd71efbbbeabe8.scope: Deactivated successfully.
Dec  5 02:24:13 compute-0 nova_compute[349548]: 2025-12-05 02:24:13.593 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.817053033 +0000 UTC m=+0.111202334 container create 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.772160302 +0000 UTC m=+0.066309653 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:24:13 compute-0 systemd[1]: Started libpod-conmon-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope.
Dec  5 02:24:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:24:13 compute-0 podman[467546]: 2025-12-05 02:24:13.983009834 +0000 UTC m=+0.277159185 container init 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:24:14 compute-0 podman[467546]: 2025-12-05 02:24:14.015565698 +0000 UTC m=+0.309715009 container start 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:24:14 compute-0 podman[467546]: 2025-12-05 02:24:14.022065101 +0000 UTC m=+0.316214412 container attach 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:24:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]: {
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_id": 0,
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "type": "bluestore"
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    },
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_id": 1,
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "type": "bluestore"
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    },
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_id": 2,
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:        "type": "bluestore"
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]:    }
Dec  5 02:24:15 compute-0 dreamy_hopper[467561]: }
Dec  5 02:24:15 compute-0 systemd[1]: libpod-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Deactivated successfully.
Dec  5 02:24:15 compute-0 systemd[1]: libpod-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Consumed 1.291s CPU time.
Dec  5 02:24:15 compute-0 podman[467594]: 2025-12-05 02:24:15.39430826 +0000 UTC m=+0.057897047 container died 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fde6c0fcf2c3be4e483ad998f9a906337b4be96b4d822bf766f0f9b2bbf96fe5-merged.mount: Deactivated successfully.
Dec  5 02:24:15 compute-0 podman[467594]: 2025-12-05 02:24:15.508085486 +0000 UTC m=+0.171674233 container remove 6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hopper, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:24:15 compute-0 systemd[1]: libpod-conmon-6321ce3e0d24e45183bbcf336bb1ad4e151e8ba5cd3458011817be62ecfc1310.scope: Deactivated successfully.
Dec  5 02:24:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:24:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:15 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:24:15 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:15 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2b08f8c6-8f9c-4d6b-9bee-aa9e27ca399f does not exist
Dec  5 02:24:15 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 621d11ca-9aba-4734-8b8d-14923b5301a6 does not exist
Dec  5 02:24:15 compute-0 podman[467634]: 2025-12-05 02:24:15.918116152 +0000 UTC m=+0.137846162 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:24:16 compute-0 podman[467681]: 2025-12-05 02:24:16.028872003 +0000 UTC m=+0.083738983 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec  5 02:24:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:16 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:24:16
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms', 'images', 'backups']
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:24:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:17 compute-0 nova_compute[349548]: 2025-12-05 02:24:17.687 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:24:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:18 compute-0 nova_compute[349548]: 2025-12-05 02:24:18.597 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:21 compute-0 podman[467702]: 2025-12-05 02:24:21.733875422 +0000 UTC m=+0.129667263 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release-0.7.12=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Dec  5 02:24:21 compute-0 podman[467703]: 2025-12-05 02:24:21.745740445 +0000 UTC m=+0.133929173 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 02:24:21 compute-0 podman[467701]: 2025-12-05 02:24:21.748615586 +0000 UTC m=+0.147547975 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, 
container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:24:22 compute-0 nova_compute[349548]: 2025-12-05 02:24:22.691 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:24:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 10K writes, 37K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2749 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 337 writes, 730 keys, 337 commit groups, 1.0 writes per commit group, ingest: 0.46 MB, 0.00 MB/s#012Interval WAL: 337 writes, 162 syncs, 2.08 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:24:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:23 compute-0 nova_compute[349548]: 2025-12-05 02:24:23.599 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:24:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:24:27 compute-0 nova_compute[349548]: 2025-12-05 02:24:27.694 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:28 compute-0 nova_compute[349548]: 2025-12-05 02:24:28.603 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:24:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.2 total, 600.0 interval#012Cumulative writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 3184 syncs, 3.69 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 439 writes, 1119 keys, 439 commit groups, 1.0 writes per commit group, ingest: 1.04 MB, 0.00 MB/s#012Interval WAL: 439 writes, 202 syncs, 2.17 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:24:29 compute-0 podman[158197]: time="2025-12-05T02:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:24:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8671 "" "Go-http-client/1.1"
Dec  5 02:24:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: ERROR   02:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:24:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:24:32 compute-0 nova_compute[349548]: 2025-12-05 02:24:32.698 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:33 compute-0 nova_compute[349548]: 2025-12-05 02:24:33.607 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:33 compute-0 podman[467759]: 2025-12-05 02:24:33.737340454 +0000 UTC m=+0.135190818 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:24:33 compute-0 podman[467761]: 2025-12-05 02:24:33.748501197 +0000 UTC m=+0.135246739 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm)
Dec  5 02:24:33 compute-0 podman[467758]: 2025-12-05 02:24:33.754957729 +0000 UTC m=+0.157773683 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:24:33 compute-0 podman[467760]: 2025-12-05 02:24:33.783333306 +0000 UTC m=+0.173638158 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 02:24:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:24:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s#012Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:24:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 02:24:37 compute-0 nova_compute[349548]: 2025-12-05 02:24:37.700 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.327 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.328 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.328 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d083590>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.342 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'name': 'te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.347 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'name': 'te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo', 'flavor': {'id': 'bfbe6bb0-5bd1-4eb9-8063-0f971ebf0e49', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'user_id': '99591ed8361e41579fee1d14f16bf0f7', 'hostId': '1d9ee94bfdb0c27cf886050001bab7f2a93221931735791e86b3ac18', 'status': 'active', 'metadata': {'metering.server_group': '92ca195d-98d1-443c-9947-dcb7ca7b926a'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd61438050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd61438050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.349 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-05T02:24:38.349364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.353 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-05T02:24:38.353756) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.378 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.379 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.402 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.403 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.404 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-05T02:24:38.405215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.406 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.407 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-05T02:24:38.407735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.469 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 30882304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.470 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.536 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 31304192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.536 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-05T02:24:38.539163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.539 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 3200956192 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.540 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.latency volume: 237184283 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.541 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 2882860455 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.541 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.latency volume: 200982064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.542 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.543 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 1101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.544 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.544 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 1122 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.545 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.545 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.546 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-05T02:24:38.543549) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.547 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.548 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-05T02:24:38.547143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.548 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.549 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.549 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.550 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.551 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 73146368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.551 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.552 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 73129984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.552 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.553 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.554 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-05T02:24:38.550738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.555 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-05T02:24:38.554626) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.595 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 nova_compute[349548]: 2025-12-05 02:24:38.609 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.637 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.638 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.639 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 11353966152 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.640 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.641 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 10991220303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.642 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.642 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-05T02:24:38.639511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.643 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.644 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.645 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.645 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.646 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-05T02:24:38.644571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.646 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.647 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.648 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.649 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-05T02:24:38.648963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.655 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.662 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.663 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.664 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-05T02:24:38.663982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.666 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.667 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.667 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.668 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.668 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/disk.device.allocation volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.669 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-05T02:24:38.666705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.672 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.672 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-05T02:24:38.670657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-05T02:24:38.673837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.675 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.676 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-05T02:24:38.676550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.677 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.678 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/memory.usage volume: 42.4765625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-05T02:24:38.678546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/memory.usage volume: 42.26953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.679 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-05T02:24:38.680230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.681 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-05T02:24:38.681830) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.683 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-05T02:24:38.683598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/cpu volume: 339720000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/cpu volume: 337230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-05T02:24:38.685324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.686 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.687 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 14 DEBUG ceilometer.compute.pollsters [-] 292fd084-0808-4a80-adc1-6ab1f28e188a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-05T02:24:38.686981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.689 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.690 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.691 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-05T02:24:38.688694) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.692 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.693 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.694 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:24:38.695 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:24:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:42 compute-0 nova_compute[349548]: 2025-12-05 02:24:42.704 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:43 compute-0 nova_compute[349548]: 2025-12-05 02:24:43.613 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:24:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:24:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:24:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786086419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:24:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:46 compute-0 podman[467846]: 2025-12-05 02:24:46.714589239 +0000 UTC m=+0.116622936 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:24:46 compute-0 podman[467845]: 2025-12-05 02:24:46.722303386 +0000 UTC m=+0.129258181 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  5 02:24:47 compute-0 nova_compute[349548]: 2025-12-05 02:24:47.705 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:24:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.462 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.463 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquired lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.463 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.464 349552 DEBUG nova.objects.instance [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:24:48 compute-0 nova_compute[349548]: 2025-12-05 02:24:48.616 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.498 349552 DEBUG nova.network.neutron [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [{"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.520 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Releasing lock "refresh_cache-292fd084-0808-4a80-adc1-6ab1f28e188a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  5 02:24:50 compute-0 nova_compute[349548]: 2025-12-05 02:24:50.521 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  5 02:24:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.065 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:24:52 compute-0 nova_compute[349548]: 2025-12-05 02:24:52.709 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:52 compute-0 podman[467889]: 2025-12-05 02:24:52.73405842 +0000 UTC m=+0.123729496 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Dec  5 02:24:52 compute-0 podman[467890]: 2025-12-05 02:24:52.736407426 +0000 UTC m=+0.120865796 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  5 02:24:52 compute-0 podman[467888]: 2025-12-05 02:24:52.744961286 +0000 UTC m=+0.141434463 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.115 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.116 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.117 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.119 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:24:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:24:53 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111209664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.614 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.620 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.717 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.718 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.727 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:24:53 compute-0 nova_compute[349548]: 2025-12-05 02:24:53.728 349552 DEBUG nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.323 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.325 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3438MB free_disk=59.897029876708984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.441 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance 292fd084-0808-4a80-adc1-6ab1f28e188a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.441 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.442 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.442 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:24:54 compute-0 nova_compute[349548]: 2025-12-05 02:24:54.517 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:24:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:24:54 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511445024' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.013 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.026 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.062 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.066 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:24:55 compute-0 nova_compute[349548]: 2025-12-05 02:24:55.067 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:24:56 compute-0 nova_compute[349548]: 2025-12-05 02:24:56.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:56 compute-0 nova_compute[349548]: 2025-12-05 02:24:56.064 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.227 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.227 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:24:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:24:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:24:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:24:57 compute-0 nova_compute[349548]: 2025-12-05 02:24:57.712 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:24:58 compute-0 nova_compute[349548]: 2025-12-05 02:24:58.626 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:24:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:24:59 compute-0 podman[158197]: time="2025-12-05T02:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:24:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8656 "" "Go-http-client/1.1"
Dec  5 02:25:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:01 compute-0 nova_compute[349548]: 2025-12-05 02:25:01.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: ERROR   02:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:25:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:25:02 compute-0 nova_compute[349548]: 2025-12-05 02:25:02.714 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:03 compute-0 nova_compute[349548]: 2025-12-05 02:25:03.630 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:04 compute-0 podman[467988]: 2025-12-05 02:25:04.704869567 +0000 UTC m=+0.106138733 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:25:04 compute-0 podman[467989]: 2025-12-05 02:25:04.717746948 +0000 UTC m=+0.115677300 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:25:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:04 compute-0 podman[467991]: 2025-12-05 02:25:04.731380791 +0000 UTC m=+0.112382788 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Dec  5 02:25:04 compute-0 podman[467990]: 2025-12-05 02:25:04.791643983 +0000 UTC m=+0.182945949 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 02:25:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:07 compute-0 nova_compute[349548]: 2025-12-05 02:25:07.717 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:08 compute-0 nova_compute[349548]: 2025-12-05 02:25:08.635 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:12 compute-0 nova_compute[349548]: 2025-12-05 02:25:12.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:13 compute-0 nova_compute[349548]: 2025-12-05 02:25:13.638 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:25:16
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', 'vms']
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:25:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 73ede9fb-70f1-4556-854b-1fc07962fd79 does not exist
Dec  5 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bc87b376-2ec2-4497-908a-6957447c51a3 does not exist
Dec  5 02:25:17 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 75fcb778-ad7c-45ce-a39a-639e55449ad7 does not exist
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:25:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:25:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:25:17 compute-0 podman[468229]: 2025-12-05 02:25:17.721295027 +0000 UTC m=+0.137586025 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:25:17 compute-0 nova_compute[349548]: 2025-12-05 02:25:17.722 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:17 compute-0 podman[468228]: 2025-12-05 02:25:17.729784526 +0000 UTC m=+0.151052914 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  5 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:18 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:25:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.545180737 +0000 UTC m=+0.082129258 container create 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:25:18 compute-0 systemd[1]: Started libpod-conmon-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope.
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.516654616 +0000 UTC m=+0.053603117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:18 compute-0 nova_compute[349548]: 2025-12-05 02:25:18.642 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.707419883 +0000 UTC m=+0.244368414 container init 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.725555082 +0000 UTC m=+0.262503603 container start 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.732371293 +0000 UTC m=+0.269319814 container attach 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:25:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:18 compute-0 lucid_euler[468398]: 167 167
Dec  5 02:25:18 compute-0 systemd[1]: libpod-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope: Deactivated successfully.
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.739566666 +0000 UTC m=+0.276515187 container died 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-7da4b8089aeb3086dfe5b2b2ab193da303889768c97df5d10a5180ef377498e6-merged.mount: Deactivated successfully.
Dec  5 02:25:18 compute-0 podman[468383]: 2025-12-05 02:25:18.826017344 +0000 UTC m=+0.362965855 container remove 3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_euler, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:25:18 compute-0 systemd[1]: libpod-conmon-3b240ea3e6a750fb1e6d146cc6dc234ef2f825fe2eb44f5837d055c26bbb59ae.scope: Deactivated successfully.
Dec  5 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.12640923 +0000 UTC m=+0.099284439 container create b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.09506603 +0000 UTC m=+0.067941299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:19 compute-0 systemd[1]: Started libpod-conmon-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope.
Dec  5 02:25:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.31618375 +0000 UTC m=+0.289059019 container init b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Dec  5 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.348783026 +0000 UTC m=+0.321658225 container start b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  5 02:25:19 compute-0 podman[468422]: 2025-12-05 02:25:19.355508365 +0000 UTC m=+0.328383634 container attach b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:25:20 compute-0 charming_davinci[468436]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:25:20 compute-0 charming_davinci[468436]: --> relative data size: 1.0
Dec  5 02:25:20 compute-0 charming_davinci[468436]: --> All data devices are unavailable
Dec  5 02:25:20 compute-0 systemd[1]: libpod-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Deactivated successfully.
Dec  5 02:25:20 compute-0 systemd[1]: libpod-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Consumed 1.259s CPU time.
Dec  5 02:25:20 compute-0 podman[468422]: 2025-12-05 02:25:20.685806468 +0000 UTC m=+1.658681677 container died b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:25:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fd11b5c318ce8ec7668e4f8a3f9b0d14884e56253c97645c004b555381b6e85-merged.mount: Deactivated successfully.
Dec  5 02:25:20 compute-0 podman[468422]: 2025-12-05 02:25:20.80126865 +0000 UTC m=+1.774143859 container remove b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:25:20 compute-0 systemd[1]: libpod-conmon-b4e9acd19f7c6e04d9a7b4c31dde4bf5916daf967817cac4b9db899dafb5a261.scope: Deactivated successfully.
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.010349349 +0000 UTC m=+0.068722642 container create dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:21.978063582 +0000 UTC m=+0.036436875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:22 compute-0 systemd[1]: Started libpod-conmon-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope.
Dec  5 02:25:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.156335039 +0000 UTC m=+0.214708382 container init dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.174558621 +0000 UTC m=+0.232931874 container start dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.180167868 +0000 UTC m=+0.238541201 container attach dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 02:25:22 compute-0 upbeat_shamir[468634]: 167 167
Dec  5 02:25:22 compute-0 systemd[1]: libpod-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope: Deactivated successfully.
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.187411812 +0000 UTC m=+0.245785095 container died dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:25:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdda18f28c946cda09eed0a9dcd803fa020cf02e83f1dd83dab0807ac0843d13-merged.mount: Deactivated successfully.
Dec  5 02:25:22 compute-0 podman[468618]: 2025-12-05 02:25:22.27350398 +0000 UTC m=+0.331877263 container remove dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:25:22 compute-0 systemd[1]: libpod-conmon-dfe9802e0e31b9233065e599125cc6a276ad22e75766333a7208049a4064ecd4.scope: Deactivated successfully.
Dec  5 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.529097737 +0000 UTC m=+0.106113451 container create 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.478082164 +0000 UTC m=+0.055097998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:22 compute-0 systemd[1]: Started libpod-conmon-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope.
Dec  5 02:25:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.717090667 +0000 UTC m=+0.294106421 container init 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:25:22 compute-0 nova_compute[349548]: 2025-12-05 02:25:22.730 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.734841766 +0000 UTC m=+0.311857490 container start 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:25:22 compute-0 podman[468658]: 2025-12-05 02:25:22.740466164 +0000 UTC m=+0.317481908 container attach 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:25:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.430341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523430422, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1061, "num_deletes": 256, "total_data_size": 1555138, "memory_usage": 1584464, "flush_reason": "Manual Compaction"}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523445529, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1540836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46276, "largest_seqno": 47336, "table_properties": {"data_size": 1535574, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 10970, "raw_average_key_size": 19, "raw_value_size": 1525110, "raw_average_value_size": 2689, "num_data_blocks": 122, "num_entries": 567, "num_filter_entries": 567, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901418, "oldest_key_time": 1764901418, "file_creation_time": 1764901523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 15270 microseconds, and 7644 cpu microseconds.
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.445609) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1540836 bytes OK
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.445632) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448219) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448240) EVENT_LOG_v1 {"time_micros": 1764901523448233, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.448261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1550143, prev total WAL file size 1550143, number of live WAL files 2.
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.449513) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1504KB)], [110(7577KB)]
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523449605, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 9299687, "oldest_snapshot_seqno": -1}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6121 keys, 9192886 bytes, temperature: kUnknown
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523527103, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9192886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9152744, "index_size": 23712, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15365, "raw_key_size": 159890, "raw_average_key_size": 26, "raw_value_size": 9042807, "raw_average_value_size": 1477, "num_data_blocks": 946, "num_entries": 6121, "num_filter_entries": 6121, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901523, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.527474) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9192886 bytes
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.531454) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.8 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.4 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(12.0) write-amplify(6.0) OK, records in: 6645, records dropped: 524 output_compression: NoCompression
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.531492) EVENT_LOG_v1 {"time_micros": 1764901523531475, "job": 66, "event": "compaction_finished", "compaction_time_micros": 77608, "compaction_time_cpu_micros": 43916, "output_level": 6, "num_output_files": 1, "total_output_size": 9192886, "num_input_records": 6645, "num_output_records": 6121, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523532315, "job": 66, "event": "table_file_deletion", "file_number": 112}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901523536330, "job": 66, "event": "table_file_deletion", "file_number": 110}
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.449188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:25:23.536465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]: {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    "0": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "devices": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "/dev/loop3"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            ],
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_name": "ceph_lv0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_size": "21470642176",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "name": "ceph_lv0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "tags": {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_name": "ceph",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.crush_device_class": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.encrypted": "0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_id": "0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.vdo": "0"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            },
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "vg_name": "ceph_vg0"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        }
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    ],
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    "1": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "devices": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "/dev/loop4"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            ],
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_name": "ceph_lv1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_size": "21470642176",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "name": "ceph_lv1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "tags": {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_name": "ceph",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.crush_device_class": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.encrypted": "0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_id": "1",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.vdo": "0"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            },
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "vg_name": "ceph_vg1"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        }
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    ],
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    "2": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "devices": [
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "/dev/loop5"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            ],
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_name": "ceph_lv2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_size": "21470642176",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "name": "ceph_lv2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "tags": {
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.cluster_name": "ceph",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.crush_device_class": "",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.encrypted": "0",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osd_id": "2",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:                "ceph.vdo": "0"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            },
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "type": "block",
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:            "vg_name": "ceph_vg2"
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:        }
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]:    ]
Dec  5 02:25:23 compute-0 crazy_satoshi[468674]: }
Dec  5 02:25:23 compute-0 systemd[1]: libpod-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope: Deactivated successfully.
Dec  5 02:25:23 compute-0 podman[468658]: 2025-12-05 02:25:23.624802291 +0000 UTC m=+1.201818045 container died 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:25:23 compute-0 nova_compute[349548]: 2025-12-05 02:25:23.646 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0632bc5c54943c3c74ec6dcb911259c24096b2713800bbe5a5d2ae35789fa947-merged.mount: Deactivated successfully.
Dec  5 02:25:23 compute-0 podman[468658]: 2025-12-05 02:25:23.708478311 +0000 UTC m=+1.285494035 container remove 99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:25:23 compute-0 systemd[1]: libpod-conmon-99eed49c996ce859f852f2110c6e544650d58b040e1d9966c4561ce9b87d13db.scope: Deactivated successfully.
Dec  5 02:25:23 compute-0 podman[468683]: 2025-12-05 02:25:23.730042327 +0000 UTC m=+0.139615152 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec  5 02:25:23 compute-0 podman[468684]: 2025-12-05 02:25:23.741045046 +0000 UTC m=+0.139741146 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  5 02:25:23 compute-0 podman[468685]: 2025-12-05 02:25:23.748711221 +0000 UTC m=+0.143349207 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec  5 02:25:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.834596219 +0000 UTC m=+0.088911708 container create 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.807534319 +0000 UTC m=+0.061849878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:24 compute-0 systemd[1]: Started libpod-conmon-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope.
Dec  5 02:25:24 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:24 compute-0 podman[468888]: 2025-12-05 02:25:24.984225082 +0000 UTC m=+0.238540611 container init 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.000629612 +0000 UTC m=+0.254945101 container start 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.00873913 +0000 UTC m=+0.263054679 container attach 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:25:25 compute-0 infallible_wright[468904]: 167 167
Dec  5 02:25:25 compute-0 systemd[1]: libpod-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope: Deactivated successfully.
Dec  5 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.013976997 +0000 UTC m=+0.268292486 container died 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:25:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f7e13bd03303cc09e79fbb408ae93e8897b911166c56c90585fea3ad5bb14e6-merged.mount: Deactivated successfully.
Dec  5 02:25:25 compute-0 podman[468888]: 2025-12-05 02:25:25.093500201 +0000 UTC m=+0.347815710 container remove 9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:25:25 compute-0 systemd[1]: libpod-conmon-9dc59d4d2dcfa88bb8d3652b39feb3596df67eaaa873303767edbb4a95222361.scope: Deactivated successfully.
Dec  5 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.398866297 +0000 UTC m=+0.112291235 container create 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  5 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.355960532 +0000 UTC m=+0.069385520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:25:25 compute-0 systemd[1]: Started libpod-conmon-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope.
Dec  5 02:25:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.57947558 +0000 UTC m=+0.292900518 container init 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.60475884 +0000 UTC m=+0.318183768 container start 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Dec  5 02:25:25 compute-0 podman[468927]: 2025-12-05 02:25:25.611587352 +0000 UTC m=+0.325012290 container attach 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.627 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.629 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.632 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.635 349552 INFO nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Terminating instance#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.637 349552 DEBUG nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:25:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:25:26 compute-0 kernel: tap706f9405-40 (unregistering): left promiscuous mode
Dec  5 02:25:26 compute-0 NetworkManager[49092]: <info>  [1764901526.7827] device (tap706f9405-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:25:26 compute-0 jolly_pike[468943]: {
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_id": 0,
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "type": "bluestore"
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    },
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_id": 1,
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "type": "bluestore"
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    },
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_id": 2,
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:25:26 compute-0 jolly_pike[468943]:        "type": "bluestore"
Dec  5 02:25:26 compute-0 jolly_pike[468943]:    }
Dec  5 02:25:26 compute-0 jolly_pike[468943]: }
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.794 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00183|binding|INFO|Releasing lport 706f9405-4061-481e-a252-9b14f4534a4e from this chassis (sb_readonly=0)
Dec  5 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00184|binding|INFO|Setting lport 706f9405-4061-481e-a252-9b14f4534a4e down in Southbound
Dec  5 02:25:26 compute-0 ovn_controller[89286]: 2025-12-05T02:25:26Z|00185|binding|INFO|Removing iface tap706f9405-40 ovn-installed in OVS
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.810 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cf:10:bc 10.100.0.151'], port_security=['fa:16:3e:cf:10:bc 10.100.0.151'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.151/16', 'neutron:device_id': '292fd084-0808-4a80-adc1-6ab1f28e188a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=706f9405-4061-481e-a252-9b14f4534a4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.814 287122 INFO neutron.agent.ovn.metadata.agent [-] Port 706f9405-4061-481e-a252-9b14f4534a4e in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 unbound from our chassis#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.818 287122 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7842201-32d0-4f34-ad6b-51f98e5f8322#033[00m
Dec  5 02:25:26 compute-0 systemd[1]: libpod-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Deactivated successfully.
Dec  5 02:25:26 compute-0 podman[468927]: 2025-12-05 02:25:26.825622268 +0000 UTC m=+1.539047166 container died 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 systemd[1]: libpod-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Consumed 1.208s CPU time.
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.840 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[775d4d5d-f8e3-4e8e-8d5f-ef40b0d67580]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec  5 02:25:26 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 7min 37.742s CPU time.
Dec  5 02:25:26 compute-0 systemd-machined[138700]: Machine qemu-12-instance-0000000b terminated.
Dec  5 02:25:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-afb6b144b12e995e297b661738217946bf38b8b4e1f38992bfc3735d6724d708-merged.mount: Deactivated successfully.
Dec  5 02:25:26 compute-0 NetworkManager[49092]: <info>  [1764901526.8703] manager: (tap706f9405-40): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.895 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba5bea6-effc-4be5-b191-7cc93efcf199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.903 349552 INFO nova.virt.libvirt.driver [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Instance destroyed successfully.#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.904 349552 DEBUG nova.objects.instance [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'resources' on Instance uuid 292fd084-0808-4a80-adc1-6ab1f28e188a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.902 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[3eddc6a7-37c8-445c-901a-28a70a0db463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 podman[468927]: 2025-12-05 02:25:26.915354628 +0000 UTC m=+1.628779526 container remove 792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_pike, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.922 349552 DEBUG nova.virt.libvirt.vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-rsaqvth2jwvx-k3ipymnd45pa',id=11,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:11:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-d903m2ip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:11:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=292fd084-0808-4a80-adc1-6ab1f28e188a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.922 349552 DEBUG nova.network.os_vif_util [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "706f9405-4061-481e-a252-9b14f4534a4e", "address": "fa:16:3e:cf:10:bc", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.151", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706f9405-40", "ovs_interfaceid": "706f9405-4061-481e-a252-9b14f4534a4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.923 349552 DEBUG nova.network.os_vif_util [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.923 349552 DEBUG os_vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.925 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.926 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap706f9405-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:26 compute-0 systemd[1]: libpod-conmon-792b82be21e2917cd76caab184cc671d5e73bcf00df0c324250d269eea94d444.scope: Deactivated successfully.
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.927 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.933 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.936 349552 INFO os_vif [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cf:10:bc,bridge_name='br-int',has_traffic_filtering=True,id=706f9405-4061-481e-a252-9b14f4534a4e,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706f9405-40')#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.937 412758 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5ec4f1-1775-439c-a909-4a3d2d608653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.959 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6f8012-d5ce-4ee1-93bf-c9857ffa1bb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7842201-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:26:70'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677128, 'reachable_time': 17791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469015, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:25:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.976 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[291654eb-68e0-4a01-bd1e-abb9feba5878]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677143, 'tstamp': 677143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469027, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapd7842201-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 677147, 'tstamp': 677147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 469027, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.979 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:26 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 07b70f09-6882-44e3-8765-2f8f1114554d does not exist
Dec  5 02:25:26 compute-0 nova_compute[349548]: 2025-12-05 02:25:26.982 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:26 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev bdcb81db-eaff-41bd-8873-ac2cfb5aeeb0 does not exist
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.983 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7842201-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.983 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.984 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7842201-30, col_values=(('external_ids', {'iface-id': '9309009c-26a0-4ed9-8142-14ad142ca1c0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:26 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:26.984 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  5 02:25:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:27.331 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:25:27 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:27.332 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.337 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG oslo_concurrency.lockutils [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.376 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.377 349552 DEBUG nova.compute.manager [req-5d3ef75e-ece6-4088-bd2d-d6f06bac962b req-bfc1b7ba-543c-4722-8785-1ec4bf5e1e18 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-unplugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015211533217750863 of space, bias 1.0, pg target 0.4563459965325259 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00125203744627857 of space, bias 1.0, pg target 0.375611233883571 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:25:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.732 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.749 349552 INFO nova.virt.libvirt.driver [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deleting instance files /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a_del#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.750 349552 INFO nova.virt.libvirt.driver [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deletion of /var/lib/nova/instances/292fd084-0808-4a80-adc1-6ab1f28e188a_del complete#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.850 349552 INFO nova.compute.manager [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.850 349552 DEBUG oslo.service.loopingcall [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.851 349552 DEBUG nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:25:27 compute-0 nova_compute[349548]: 2025-12-05 02:25:27.851 349552 DEBUG nova.network.neutron [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:25:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:27 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:25:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 236 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec  5 02:25:29 compute-0 podman[158197]: time="2025-12-05T02:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43813 "" "Go-http-client/1.1"
Dec  5 02:25:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8672 "" "Go-http-client/1.1"
Dec  5 02:25:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 217 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 2 op/s
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: ERROR   02:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:25:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.702 349552 DEBUG nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.703 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.704 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.705 349552 DEBUG oslo_concurrency.lockutils [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.705 349552 DEBUG nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] No waiting events found dispatching network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.706 349552 WARNING nova.compute.manager [req-03225acd-c0cd-4888-9127-7a6ca0abdd40 req-2055049e-35f8-4479-9ea8-0d8881d567cc a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received unexpected event network-vif-plugged-706f9405-4061-481e-a252-9b14f4534a4e for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:25:31 compute-0 nova_compute[349548]: 2025-12-05 02:25:31.929 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.151 349552 DEBUG nova.network.neutron [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.170 349552 INFO nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Took 4.32 seconds to deallocate network for instance.#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.221 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.222 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.321 349552 DEBUG oslo_concurrency.processutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.735 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:25:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424006511' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.837 349552 DEBUG oslo_concurrency.processutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.850 349552 DEBUG nova.compute.provider_tree [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.868 349552 DEBUG nova.scheduler.client.report [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.895 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:32 compute-0 nova_compute[349548]: 2025-12-05 02:25:32.927 349552 INFO nova.scheduler.client.report [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Deleted allocations for instance 292fd084-0808-4a80-adc1-6ab1f28e188a#033[00m
Dec  5 02:25:33 compute-0 nova_compute[349548]: 2025-12-05 02:25:33.010 349552 DEBUG oslo_concurrency.lockutils [None req-284530b8-d64e-4dd5-b432-26a75f74bead 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "292fd084-0808-4a80-adc1-6ab1f28e188a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:33 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:33.335 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:33 compute-0 nova_compute[349548]: 2025-12-05 02:25:33.806 349552 DEBUG nova.compute.manager [req-2a116095-54a5-4ba7-ae51-3f19bec548dc req-cf3ca216-b0e2-4087-b4a8-8d125204fc3f a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Received event network-vif-deleted-706f9405-4061-481e-a252-9b14f4534a4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:35 compute-0 podman[469104]: 2025-12-05 02:25:35.747160607 +0000 UTC m=+0.151496696 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:25:35 compute-0 podman[469105]: 2025-12-05 02:25:35.751110427 +0000 UTC m=+0.152609947 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:25:35 compute-0 podman[469107]: 2025-12-05 02:25:35.751782226 +0000 UTC m=+0.134168599 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9)
Dec  5 02:25:35 compute-0 podman[469106]: 2025-12-05 02:25:35.764158644 +0000 UTC m=+0.158058360 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 02:25:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:36 compute-0 nova_compute[349548]: 2025-12-05 02:25:36.933 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:37 compute-0 nova_compute[349548]: 2025-12-05 02:25:37.738 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.034 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.035 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.036 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.037 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.038 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.041 349552 INFO nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Terminating instance#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.044 349552 DEBUG nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  5 02:25:40 compute-0 kernel: tapafc3cf6c-cb (unregistering): left promiscuous mode
Dec  5 02:25:40 compute-0 NetworkManager[49092]: <info>  [1764901540.1705] device (tapafc3cf6c-cb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  5 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00186|binding|INFO|Releasing lport afc3cf6c-cbe3-4163-920e-7122f474d371 from this chassis (sb_readonly=0)
Dec  5 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00187|binding|INFO|Setting lport afc3cf6c-cbe3-4163-920e-7122f474d371 down in Southbound
Dec  5 02:25:40 compute-0 ovn_controller[89286]: 2025-12-05T02:25:40Z|00188|binding|INFO|Removing iface tapafc3cf6c-cb ovn-installed in OVS
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.190 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.193 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.200 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:80:52 10.100.2.8'], port_security=['fa:16:3e:69:80:52 10.100.2.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.8/16', 'neutron:device_id': 'e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b01709a3378347e1a3f25eeb2b8b1bca', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cb556767-8d1b-4432-9d0a-485dcba856ee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=40610b26-f7eb-46a6-9c49-714ab1f77db8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>], logical_port=afc3cf6c-cbe3-4163-920e-7122f474d371) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f64f06b6fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.203 287122 INFO neutron.agent.ovn.metadata.agent [-] Port afc3cf6c-cbe3-4163-920e-7122f474d371 in datapath d7842201-32d0-4f34-ad6b-51f98e5f8322 unbound from our chassis#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.205 287122 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7842201-32d0-4f34-ad6b-51f98e5f8322, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.206 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[29dd5691-c458-4afb-99a5-8333229c19db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.207 287122 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 namespace which is not needed anymore#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.235 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec  5 02:25:40 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 7min 1.246s CPU time.
Dec  5 02:25:40 compute-0 systemd-machined[138700]: Machine qemu-16-instance-0000000f terminated.
Dec  5 02:25:40 compute-0 kernel: tapafc3cf6c-cb: entered promiscuous mode
Dec  5 02:25:40 compute-0 kernel: tapafc3cf6c-cb (unregistering): left promiscuous mode
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.303 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.331 349552 INFO nova.virt.libvirt.driver [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Instance destroyed successfully.#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.332 349552 DEBUG nova.objects.instance [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lazy-loading 'resources' on Instance uuid e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.353 349552 DEBUG nova.virt.libvirt.vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-05T02:15:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-3255585-asg-ymkpcnuo2iqm-egephyv4dydi-sxgc5dh3lpwo',id=15,image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-05T02:15:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92ca195d-98d1-443c-9947-dcb7ca7b926a'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b01709a3378347e1a3f25eeb2b8b1bca',ramdisk_id='',reservation_id='r-hkm16u1q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='773b9b2a-2bf7-4ae4-aa9c-152d087ecf6e',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-257639068',owner_user_name='tempest-PrometheusGabbiTest-257639068-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-05T02:15:42Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='99591ed8361e41579fee1d14f16bf0f7',uuid=e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.354 349552 DEBUG nova.network.os_vif_util [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converting VIF {"id": "afc3cf6c-cbe3-4163-920e-7122f474d371", "address": "fa:16:3e:69:80:52", "network": {"id": "d7842201-32d0-4f34-ad6b-51f98e5f8322", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b01709a3378347e1a3f25eeb2b8b1bca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafc3cf6c-cb", "ovs_interfaceid": "afc3cf6c-cbe3-4163-920e-7122f474d371", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.356 349552 DEBUG nova.network.os_vif_util [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.357 349552 DEBUG os_vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.361 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.363 349552 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafc3cf6c-cb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.366 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.368 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.371 349552 INFO os_vif [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:80:52,bridge_name='br-int',has_traffic_filtering=True,id=afc3cf6c-cbe3-4163-920e-7122f474d371,network=Network(d7842201-32d0-4f34-ad6b-51f98e5f8322),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafc3cf6c-cb')#033[00m
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : haproxy version is 2.8.14-c23fe91
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [NOTICE]   (448153) : path to executable is /usr/sbin/haproxy
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : Exiting Master process...
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : Exiting Master process...
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [ALERT]    (448153) : Current worker (448155) exited with code 143 (Terminated)
Dec  5 02:25:40 compute-0 neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322[448122]: [WARNING]  (448153) : All workers exited. Exiting... (0)
Dec  5 02:25:40 compute-0 systemd[1]: libpod-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope: Deactivated successfully.
Dec  5 02:25:40 compute-0 podman[469219]: 2025-12-05 02:25:40.485835144 +0000 UTC m=+0.091129721 container died 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  5 02:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a-userdata-shm.mount: Deactivated successfully.
Dec  5 02:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b0054f478c906d197442626f618ca33515a9d994cb127e662a2ffd07bf0dae3-merged.mount: Deactivated successfully.
Dec  5 02:25:40 compute-0 podman[469219]: 2025-12-05 02:25:40.561105618 +0000 UTC m=+0.166400175 container cleanup 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  5 02:25:40 compute-0 systemd[1]: libpod-conmon-41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a.scope: Deactivated successfully.
Dec  5 02:25:40 compute-0 podman[469267]: 2025-12-05 02:25:40.684485863 +0000 UTC m=+0.080884473 container remove 41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.692 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed36477-1ba0-4adc-9167-7cafac69eb1b]: (4, ('Fri Dec  5 02:25:40 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 (41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a)\n41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a\nFri Dec  5 02:25:40 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 (41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a)\n41a7f613f1a3d37be573dc0cfc9ba0c7fef5c7c9b4960a56e47c98599276663a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.694 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[a3efce11-42ad-47e4-aa34-0a7632c3c308]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.695 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7842201-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.698 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 kernel: tapd7842201-30: left promiscuous mode
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.725 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.729 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[811fb664-4c35-4a56-abf7-3f9911be34e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.749 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c53621-3bff-47fe-83d1-19061ec3aa01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.751 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[48e9abeb-cf44-4efd-9b47-56b1dc1d2c6a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 157 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.767 412744 DEBUG oslo.privsep.daemon [-] privsep: reply[7542e4c1-f3c8-4dca-b5ad-dde3c9cee611]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 677120, 'reachable_time': 20407, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 469281, 'error': None, 'target': 'ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.770 287504 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7842201-32d0-4f34-ad6b-51f98e5f8322 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  5 02:25:40 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:40.770 287504 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbf8028-8372-412c-b863-c9aa40cf280c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  5 02:25:40 compute-0 systemd[1]: run-netns-ovnmeta\x2dd7842201\x2d32d0\x2d4f34\x2dad6b\x2d51f98e5f8322.mount: Deactivated successfully.
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.978 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.979 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG oslo_concurrency.lockutils [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.982 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:25:40 compute-0 nova_compute[349548]: 2025-12-05 02:25:40.983 349552 DEBUG nova.compute.manager [req-c1cc4d6f-cc5b-4ee8-b9c0-2c288df953c0 req-d309cc95-be14-4cb2-8a37-59f7939ce07a a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-unplugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.257 349552 INFO nova.virt.libvirt.driver [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deleting instance files /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_del#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.258 349552 INFO nova.virt.libvirt.driver [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deletion of /var/lib/nova/instances/e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7_del complete#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.344 349552 INFO nova.compute.manager [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 1.30 seconds to destroy the instance on the hypervisor.#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.345 349552 DEBUG oslo.service.loopingcall [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.345 349552 DEBUG nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.346 349552 DEBUG nova.network.neutron [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.889 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764901526.887611, 292fd084-0808-4a80-adc1-6ab1f28e188a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.889 349552 INFO nova.compute.manager [-] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:25:41 compute-0 nova_compute[349548]: 2025-12-05 02:25:41.912 349552 DEBUG nova.compute.manager [None req-60e48250-4e16-4ef0-b54d-3b2d708a498d - - - - - -] [instance: 292fd084-0808-4a80-adc1-6ab1f28e188a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:25:42 compute-0 nova_compute[349548]: 2025-12-05 02:25:42.742 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.3 KiB/s wr, 50 op/s
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.080 349552 DEBUG nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Acquiring lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.081 349552 DEBUG oslo_concurrency.lockutils [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.082 349552 DEBUG nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] No waiting events found dispatching network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.082 349552 WARNING nova.compute.manager [req-a105f9ce-3157-42d5-86c2-2f8b9fef44c5 req-30a9a9c3-90d7-4533-bf9d-9244eac16136 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received unexpected event network-vif-plugged-afc3cf6c-cbe3-4163-920e-7122f474d371 for instance with vm_state active and task_state deleting.#033[00m
Dec  5 02:25:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.542 349552 DEBUG nova.network.neutron [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.563 349552 INFO nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Took 2.22 seconds to deallocate network for instance.#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.625 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.626 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:43 compute-0 nova_compute[349548]: 2025-12-05 02:25:43.688 349552 DEBUG oslo_concurrency.processutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:25:44 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:25:44 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1129251840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.185 349552 DEBUG oslo_concurrency.processutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.193 349552 DEBUG nova.compute.provider_tree [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.212 349552 DEBUG nova.scheduler.client.report [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.293 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.327 349552 INFO nova.scheduler.client.report [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Deleted allocations for instance e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7#033[00m
Dec  5 02:25:44 compute-0 nova_compute[349548]: 2025-12-05 02:25:44.386 349552 DEBUG oslo_concurrency.lockutils [None req-b20d2d0a-2297-49b3-b81a-0c68647b70ba 99591ed8361e41579fee1d14f16bf0f7 b01709a3378347e1a3f25eeb2b8b1bca - - default default] Lock "e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 93 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Dec  5 02:25:45 compute-0 nova_compute[349548]: 2025-12-05 02:25:45.293 349552 DEBUG nova.compute.manager [req-66ea741d-9d2f-4a20-bbd1-5052153c7497 req-3cfb8963-9904-41da-a7be-68f2b29f2ed6 a514695d37f64a428b2369a0ed0e45f8 3966983fb0bf4222a79ffd1fb50974ce - - default default] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Received event network-vif-deleted-afc3cf6c-cbe3-4163-920e-7122f474d371 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  5 02:25:45 compute-0 nova_compute[349548]: 2025-12-05 02:25:45.369 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:25:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:25:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:25:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737544502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:25:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:47 compute-0 nova_compute[349548]: 2025-12-05 02:25:47.744 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:25:48 compute-0 nova_compute[349548]: 2025-12-05 02:25:48.127 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:25:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:48 compute-0 podman[469305]: 2025-12-05 02:25:48.706971848 +0000 UTC m=+0.112532131 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  5 02:25:48 compute-0 podman[469306]: 2025-12-05 02:25:48.721960619 +0000 UTC m=+0.124830197 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:25:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:50 compute-0 nova_compute[349548]: 2025-12-05 02:25:50.374 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec  5 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec  5 02:25:52 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec  5 02:25:52 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec  5 02:25:52 compute-0 nova_compute[349548]: 2025-12-05 02:25:52.747 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 511 B/s wr, 10 op/s
Dec  5 02:25:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:54 compute-0 nova_compute[349548]: 2025-12-05 02:25:54.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec  5 02:25:54 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec  5 02:25:54 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec  5 02:25:54 compute-0 podman[469351]: 2025-12-05 02:25:54.723830133 +0000 UTC m=+0.129371864 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  5 02:25:54 compute-0 podman[469353]: 2025-12-05 02:25:54.742812456 +0000 UTC m=+0.127571504 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:25:54 compute-0 podman[469352]: 2025-12-05 02:25:54.754832514 +0000 UTC m=+0.151756533 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 02:25:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 77 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 7.4 KiB/s rd, 639 B/s wr, 9 op/s
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.131 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.132 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.133 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.134 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec  5 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec  5 02:25:55 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.324 349552 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764901540.3228314, e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.326 349552 INFO nova.compute.manager [-] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] VM Stopped (Lifecycle Event)#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.352 349552 DEBUG nova.compute.manager [None req-0dbaae16-d180-419b-a043-f3b8f258d1d5 - - - - - -] [instance: e76adc0a-f45a-4b38-9b2b-ae93ab4c0bb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.379 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:55 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:25:55 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955854603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:25:55 compute-0 nova_compute[349548]: 2025-12-05 02:25:55.690 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.228 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:25:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.323 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.326 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4005MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.326 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.327 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.550 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.552 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:25:56 compute-0 nova_compute[349548]: 2025-12-05 02:25:56.637 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:25:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 85 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 22 op/s
Dec  5 02:25:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:25:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3052801297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.165 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.180 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.202 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.231 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.233 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:25:57 compute-0 nova_compute[349548]: 2025-12-05 02:25:57.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:25:58 compute-0 nova_compute[349548]: 2025-12-05 02:25:58.230 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:58 compute-0 nova_compute[349548]: 2025-12-05 02:25:58.231 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:25:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 65 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.1 MiB/s wr, 92 op/s
Dec  5 02:25:59 compute-0 nova_compute[349548]: 2025-12-05 02:25:59.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:25:59 compute-0 podman[158197]: time="2025-12-05T02:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:25:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8198 "" "Go-http-client/1.1"
Dec  5 02:26:00 compute-0 nova_compute[349548]: 2025-12-05 02:26:00.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:00 compute-0 nova_compute[349548]: 2025-12-05 02:26:00.383 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: ERROR   02:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:26:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:26:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.4 MiB/s wr, 72 op/s
Dec  5 02:26:02 compute-0 nova_compute[349548]: 2025-12-05 02:26:02.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:03 compute-0 nova_compute[349548]: 2025-12-05 02:26:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec  5 02:26:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec  5 02:26:03 compute-0 ceph-mon[192914]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec  5 02:26:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 1.3 MiB/s wr, 58 op/s
Dec  5 02:26:05 compute-0 nova_compute[349548]: 2025-12-05 02:26:05.387 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:06 compute-0 nova_compute[349548]: 2025-12-05 02:26:06.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:06 compute-0 podman[469453]: 2025-12-05 02:26:06.712060521 +0000 UTC m=+0.106204304 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:26:06 compute-0 podman[469452]: 2025-12-05 02:26:06.727101424 +0000 UTC m=+0.124710064 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3)
Dec  5 02:26:06 compute-0 podman[469455]: 2025-12-05 02:26:06.727991689 +0000 UTC m=+0.109833136 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, architecture=x86_64, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:26:06 compute-0 podman[469454]: 2025-12-05 02:26:06.769193456 +0000 UTC m=+0.152706610 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:26:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Dec  5 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.080 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.081 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:26:07 compute-0 nova_compute[349548]: 2025-12-05 02:26:07.784 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 307 B/s wr, 29 op/s
Dec  5 02:26:10 compute-0 nova_compute[349548]: 2025-12-05 02:26:10.394 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Dec  5 02:26:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 0 B/s wr, 71 op/s
Dec  5 02:26:12 compute-0 nova_compute[349548]: 2025-12-05 02:26:12.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 0 B/s wr, 63 op/s
Dec  5 02:26:15 compute-0 nova_compute[349548]: 2025-12-05 02:26:15.397 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:26:16
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:26:17 compute-0 nova_compute[349548]: 2025-12-05 02:26:17.789 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:26:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:26:19 compute-0 podman[469537]: 2025-12-05 02:26:19.707634121 +0000 UTC m=+0.115455183 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec  5 02:26:19 compute-0 podman[469538]: 2025-12-05 02:26:19.740437592 +0000 UTC m=+0.143099690 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:26:20 compute-0 nova_compute[349548]: 2025-12-05 02:26:20.401 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Dec  5 02:26:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Dec  5 02:26:22 compute-0 nova_compute[349548]: 2025-12-05 02:26:22.792 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:25 compute-0 nova_compute[349548]: 2025-12-05 02:26:25.407 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:25 compute-0 podman[469577]: 2025-12-05 02:26:25.716233346 +0000 UTC m=+0.119823007 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  5 02:26:25 compute-0 podman[469578]: 2025-12-05 02:26:25.728309975 +0000 UTC m=+0.122186373 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=)
Dec  5 02:26:25 compute-0 podman[469579]: 2025-12-05 02:26:25.740864538 +0000 UTC m=+0.123421608 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:26:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:26:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:26:27 compute-0 nova_compute[349548]: 2025-12-05 02:26:27.795 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 7428df2d-cd5a-456d-be9b-75c57688bc4f does not exist
Dec  5 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 46673529-1cb0-467d-b38f-60fdebea18fc does not exist
Dec  5 02:26:28 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e105e11f-c350-4062-b99a-ddcec7ac8488 does not exist
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:26:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:26:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:26:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:29 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.683396786 +0000 UTC m=+0.092197080 container create 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.654153625 +0000 UTC m=+0.062953959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:29 compute-0 podman[158197]: time="2025-12-05T02:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:26:29 compute-0 systemd[1]: Started libpod-conmon-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope.
Dec  5 02:26:29 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.822724409 +0000 UTC m=+0.231524763 container init 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.839184632 +0000 UTC m=+0.247984926 container start 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.845415717 +0000 UTC m=+0.254216061 container attach 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  5 02:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 43954 "" "Go-http-client/1.1"
Dec  5 02:26:29 compute-0 admiring_noether[469917]: 167 167
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.858440253 +0000 UTC m=+0.267240547 container died 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:26:29 compute-0 systemd[1]: libpod-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope: Deactivated successfully.
Dec  5 02:26:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8190 "" "Go-http-client/1.1"
Dec  5 02:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-712987297a90ba39d12d0209fa2bfc10910b34f2212bf01c7c46faaaf40013a3-merged.mount: Deactivated successfully.
Dec  5 02:26:29 compute-0 podman[469902]: 2025-12-05 02:26:29.938631355 +0000 UTC m=+0.347431619 container remove 276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_noether, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:26:29 compute-0 systemd[1]: libpod-conmon-276cd942c49468324aefd5ebe189668f72ad7ee03d058cc42e9519a797d32daf.scope: Deactivated successfully.
Dec  5 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.227716164 +0000 UTC m=+0.091695686 container create 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.186349072 +0000 UTC m=+0.050328644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:30 compute-0 systemd[1]: Started libpod-conmon-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope.
Dec  5 02:26:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.388145659 +0000 UTC m=+0.252125221 container init 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:26:30 compute-0 nova_compute[349548]: 2025-12-05 02:26:30.411 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.417107812 +0000 UTC m=+0.281087324 container start 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:26:30 compute-0 podman[469943]: 2025-12-05 02:26:30.423198593 +0000 UTC m=+0.287178175 container attach 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  5 02:26:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: ERROR   02:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:26:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:26:31 compute-0 blissful_keller[469959]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:26:31 compute-0 blissful_keller[469959]: --> relative data size: 1.0
Dec  5 02:26:31 compute-0 blissful_keller[469959]: --> All data devices are unavailable
Dec  5 02:26:31 compute-0 systemd[1]: libpod-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Deactivated successfully.
Dec  5 02:26:31 compute-0 systemd[1]: libpod-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Consumed 1.332s CPU time.
Dec  5 02:26:31 compute-0 podman[469943]: 2025-12-05 02:26:31.797139411 +0000 UTC m=+1.661118923 container died 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5853ebb4a996c5eeabebb06445f9f5a5640501e55558f24fb85080bf4c52836-merged.mount: Deactivated successfully.
Dec  5 02:26:31 compute-0 podman[469943]: 2025-12-05 02:26:31.933429549 +0000 UTC m=+1.797409061 container remove 4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 02:26:31 compute-0 systemd[1]: libpod-conmon-4d75d118ab3fb9cc10a2f500e5bd44a8b3c2b7b6d018fef8f1ce3ffc51b0fe7c.scope: Deactivated successfully.
Dec  5 02:26:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:32 compute-0 nova_compute[349548]: 2025-12-05 02:26:32.798 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.07708983 +0000 UTC m=+0.092571171 container create a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.042076597 +0000 UTC m=+0.057557978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:33 compute-0 systemd[1]: Started libpod-conmon-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope.
Dec  5 02:26:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.22450652 +0000 UTC m=+0.239987901 container init a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.240201971 +0000 UTC m=+0.255683312 container start a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.247579838 +0000 UTC m=+0.263061179 container attach a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Dec  5 02:26:33 compute-0 funny_bartik[470154]: 167 167
Dec  5 02:26:33 compute-0 systemd[1]: libpod-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope: Deactivated successfully.
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.250717366 +0000 UTC m=+0.266198727 container died a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 02:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-72cea38ead625c29569bf959665300a0873118d82065e5e1e2d5a20433b7cbd8-merged.mount: Deactivated successfully.
Dec  5 02:26:33 compute-0 podman[470139]: 2025-12-05 02:26:33.316997768 +0000 UTC m=+0.332479079 container remove a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_bartik, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Dec  5 02:26:33 compute-0 systemd[1]: libpod-conmon-a0e974d76641cd6d297decefb2fa5e2872a506f61e30cc86a6f737a13f9d1dfa.scope: Deactivated successfully.
Dec  5 02:26:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.605561553 +0000 UTC m=+0.087252052 container create b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.576175167 +0000 UTC m=+0.057865646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:33 compute-0 systemd[1]: Started libpod-conmon-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope.
Dec  5 02:26:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.804403417 +0000 UTC m=+0.286093946 container init b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Dec  5 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.8162474 +0000 UTC m=+0.297937889 container start b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:26:33 compute-0 podman[470177]: 2025-12-05 02:26:33.822557937 +0000 UTC m=+0.304248436 container attach b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]: {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    "0": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "devices": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "/dev/loop3"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            ],
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_name": "ceph_lv0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_size": "21470642176",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "name": "ceph_lv0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "tags": {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_name": "ceph",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.crush_device_class": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.encrypted": "0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_id": "0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.vdo": "0"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            },
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "vg_name": "ceph_vg0"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        }
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    ],
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    "1": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "devices": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "/dev/loop4"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            ],
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_name": "ceph_lv1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_size": "21470642176",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "name": "ceph_lv1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "tags": {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_name": "ceph",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.crush_device_class": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.encrypted": "0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_id": "1",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.vdo": "0"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            },
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "vg_name": "ceph_vg1"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        }
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    ],
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    "2": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "devices": [
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "/dev/loop5"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            ],
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_name": "ceph_lv2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_size": "21470642176",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "name": "ceph_lv2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "tags": {
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.cluster_name": "ceph",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.crush_device_class": "",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.encrypted": "0",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osd_id": "2",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:                "ceph.vdo": "0"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            },
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "type": "block",
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:            "vg_name": "ceph_vg2"
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:        }
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]:    ]
Dec  5 02:26:34 compute-0 elegant_blackburn[470193]: }
Dec  5 02:26:34 compute-0 systemd[1]: libpod-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope: Deactivated successfully.
Dec  5 02:26:34 compute-0 podman[470177]: 2025-12-05 02:26:34.655044817 +0000 UTC m=+1.136735316 container died b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-52018a5df5fb1c36637641c58e84df0f222bf91df66824de22b28c508b823e92-merged.mount: Deactivated successfully.
Dec  5 02:26:34 compute-0 podman[470177]: 2025-12-05 02:26:34.745958581 +0000 UTC m=+1.227649050 container remove b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 02:26:34 compute-0 systemd[1]: libpod-conmon-b82b15e8e1e24664d4af56e4d4aaebad42f061e7adb571ce33c1e67a89f9e6d6.scope: Deactivated successfully.
Dec  5 02:26:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2331: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:35 compute-0 nova_compute[349548]: 2025-12-05 02:26:35.417 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.850669007 +0000 UTC m=+0.087726984 container create f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.816100977 +0000 UTC m=+0.053159014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:35 compute-0 systemd[1]: Started libpod-conmon-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope.
Dec  5 02:26:35 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:35 compute-0 podman[470349]: 2025-12-05 02:26:35.985010011 +0000 UTC m=+0.222068038 container init f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.000403913 +0000 UTC m=+0.237461910 container start f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.007955215 +0000 UTC m=+0.245013182 container attach f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:26:36 compute-0 hopeful_bhabha[470364]: 167 167
Dec  5 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.011952127 +0000 UTC m=+0.249010094 container died f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:26:36 compute-0 systemd[1]: libpod-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope: Deactivated successfully.
Dec  5 02:26:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c1de820f9b2704449172cf3f57ca4ea3a7119546dbccb9f07cda48194777ae-merged.mount: Deactivated successfully.
Dec  5 02:26:36 compute-0 podman[470349]: 2025-12-05 02:26:36.078716812 +0000 UTC m=+0.315774759 container remove f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:26:36 compute-0 systemd[1]: libpod-conmon-f75e0aa3c5e1e0d20327c1a45800b56e8e2a100bc1be18f4da00aea1154cd42b.scope: Deactivated successfully.
Dec  5 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.313512227 +0000 UTC m=+0.074333059 container create c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  5 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.277065923 +0000 UTC m=+0.037886765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:26:36 compute-0 systemd[1]: Started libpod-conmon-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope.
Dec  5 02:26:36 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.476204006 +0000 UTC m=+0.237024848 container init c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.508315768 +0000 UTC m=+0.269136610 container start c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:26:36 compute-0 podman[470387]: 2025-12-05 02:26:36.515209292 +0000 UTC m=+0.276030164 container attach c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Dec  5 02:26:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:36 compute-0 ovn_controller[89286]: 2025-12-05T02:26:36Z|00189|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  5 02:26:37 compute-0 nova_compute[349548]: 2025-12-05 02:26:37.110 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:37 compute-0 musing_wright[470403]: {
Dec  5 02:26:37 compute-0 musing_wright[470403]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_id": 0,
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "type": "bluestore"
Dec  5 02:26:37 compute-0 musing_wright[470403]:    },
Dec  5 02:26:37 compute-0 musing_wright[470403]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_id": 1,
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "type": "bluestore"
Dec  5 02:26:37 compute-0 musing_wright[470403]:    },
Dec  5 02:26:37 compute-0 musing_wright[470403]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_id": 2,
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:26:37 compute-0 musing_wright[470403]:        "type": "bluestore"
Dec  5 02:26:37 compute-0 musing_wright[470403]:    }
Dec  5 02:26:37 compute-0 musing_wright[470403]: }
Dec  5 02:26:37 compute-0 systemd[1]: libpod-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Deactivated successfully.
Dec  5 02:26:37 compute-0 systemd[1]: libpod-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Consumed 1.201s CPU time.
Dec  5 02:26:37 compute-0 podman[470387]: 2025-12-05 02:26:37.710569443 +0000 UTC m=+1.471390255 container died c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:26:37 compute-0 podman[470434]: 2025-12-05 02:26:37.722030295 +0000 UTC m=+0.117613934 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  5 02:26:37 compute-0 podman[470429]: 2025-12-05 02:26:37.726659675 +0000 UTC m=+0.131399551 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec  5 02:26:37 compute-0 podman[470430]: 2025-12-05 02:26:37.729728911 +0000 UTC m=+0.130313381 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4961c78851a0e43162b22e067eda534efea07d920f61f54f1d5588f36f08a2-merged.mount: Deactivated successfully.
Dec  5 02:26:37 compute-0 podman[470387]: 2025-12-05 02:26:37.776109114 +0000 UTC m=+1.536929916 container remove c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wright, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:26:37 compute-0 podman[470433]: 2025-12-05 02:26:37.776643189 +0000 UTC m=+0.165638043 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:26:37 compute-0 systemd[1]: libpod-conmon-c8c7c4e8ff4ee1cb6cfab13a0a23c166a5a19ab363eb126ac339125613c3a976.scope: Deactivated successfully.
Dec  5 02:26:37 compute-0 nova_compute[349548]: 2025-12-05 02:26:37.800 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:26:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:26:37 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 15087611-b428-4a72-a071-d9d103b888bc does not exist
Dec  5 02:26:37 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0fe80d39-c08d-4c04-a278-fa03667be193 does not exist
Dec  5 02:26:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:38 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.328 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.329 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.329 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.334 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.341 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:26:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:26:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2333: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:40 compute-0 nova_compute[349548]: 2025-12-05 02:26:40.421 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2334: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:42 compute-0 nova_compute[349548]: 2025-12-05 02:26:42.803 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:26:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:26:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:26:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899715316' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:26:45 compute-0 nova_compute[349548]: 2025-12-05 02:26:45.427 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:26:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:47 compute-0 nova_compute[349548]: 2025-12-05 02:26:47.806 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.403 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.404 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.404 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.429 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:26:50 compute-0 nova_compute[349548]: 2025-12-05 02:26:50.431 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:50 compute-0 podman[470583]: 2025-12-05 02:26:50.731584498 +0000 UTC m=+0.126101342 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:26:50 compute-0 podman[470582]: 2025-12-05 02:26:50.743104152 +0000 UTC m=+0.142043660 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Dec  5 02:26:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2339: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:52 compute-0 nova_compute[349548]: 2025-12-05 02:26:52.811 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:53 compute-0 nova_compute[349548]: 2025-12-05 02:26:53.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:53 compute-0 nova_compute[349548]: 2025-12-05 02:26:53.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:26:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2341: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:55 compute-0 nova_compute[349548]: 2025-12-05 02:26:55.436 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:56 compute-0 nova_compute[349548]: 2025-12-05 02:26:56.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:56 compute-0 nova_compute[349548]: 2025-12-05 02:26:56.069 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.229 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.230 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:26:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:26:56.230 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:26:56 compute-0 podman[470625]: 2025-12-05 02:26:56.733481686 +0000 UTC m=+0.136611778 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec  5 02:26:56 compute-0 podman[470626]: 2025-12-05 02:26:56.736785778 +0000 UTC m=+0.133656324 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 02:26:56 compute-0 podman[470624]: 2025-12-05 02:26:56.75038803 +0000 UTC m=+0.159052098 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:26:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2342: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.108 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.109 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:26:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:26:57 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954224562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.613 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:26:57 compute-0 nova_compute[349548]: 2025-12-05 02:26:57.812 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.112 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.113 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3966MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.113 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.114 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.197 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.198 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.220 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:26:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:26:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:26:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702320788' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.806 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:26:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.819 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.842 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.844 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:26:58 compute-0 nova_compute[349548]: 2025-12-05 02:26:58.845 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:26:59 compute-0 podman[158197]: time="2025-12-05T02:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:26:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8189 "" "Go-http-client/1.1"
Dec  5 02:26:59 compute-0 nova_compute[349548]: 2025-12-05 02:26:59.842 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:27:00 compute-0 nova_compute[349548]: 2025-12-05 02:27:00.441 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: ERROR   02:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:27:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:27:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:02 compute-0 nova_compute[349548]: 2025-12-05 02:27:02.815 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:04 compute-0 nova_compute[349548]: 2025-12-05 02:27:04.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:27:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:05 compute-0 nova_compute[349548]: 2025-12-05 02:27:05.447 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:07 compute-0 nova_compute[349548]: 2025-12-05 02:27:07.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:08 compute-0 podman[470723]: 2025-12-05 02:27:08.715070395 +0000 UTC m=+0.111779080 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:27:08 compute-0 podman[470725]: 2025-12-05 02:27:08.72841985 +0000 UTC m=+0.114439605 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, version=9.6, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter)
Dec  5 02:27:08 compute-0 podman[470722]: 2025-12-05 02:27:08.75405182 +0000 UTC m=+0.156739253 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  5 02:27:08 compute-0 podman[470724]: 2025-12-05 02:27:08.767314813 +0000 UTC m=+0.158233406 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  5 02:27:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2348: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:10 compute-0 nova_compute[349548]: 2025-12-05 02:27:10.451 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2349: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2350: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:12 compute-0 nova_compute[349548]: 2025-12-05 02:27:12.822 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:12 compute-0 systemd-logind[792]: New session 64 of user zuul.
Dec  5 02:27:12 compute-0 systemd[1]: Started Session 64 of User zuul.
Dec  5 02:27:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:15 compute-0 nova_compute[349548]: 2025-12-05 02:27:15.456 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:27:16
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'images', '.mgr', 'vms', 'volumes']
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15545 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2352: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:17 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15547 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:17 compute-0 nova_compute[349548]: 2025-12-05 02:27:17.825 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:17 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  5 02:27:17 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1078030792' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:27:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:20 compute-0 nova_compute[349548]: 2025-12-05 02:27:20.461 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:21 compute-0 ovs-vsctl[471091]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  5 02:27:21 compute-0 podman[471098]: 2025-12-05 02:27:21.734310825 +0000 UTC m=+0.141327671 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:27:21 compute-0 podman[471100]: 2025-12-05 02:27:21.751323089 +0000 UTC m=+0.154102252 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:27:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:22 compute-0 nova_compute[349548]: 2025-12-05 02:27:22.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:22 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  5 02:27:22 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  5 02:27:23 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  5 02:27:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:23 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: cache status {prefix=cache status} (starting...)
Dec  5 02:27:23 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: client ls {prefix=client ls} (starting...)
Dec  5 02:27:24 compute-0 lvm[471450]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 02:27:24 compute-0 lvm[471447]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 02:27:24 compute-0 lvm[471447]: VG ceph_vg0 finished
Dec  5 02:27:24 compute-0 lvm[471450]: VG ceph_vg2 finished
Dec  5 02:27:24 compute-0 lvm[471493]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 02:27:24 compute-0 lvm[471493]: VG ceph_vg1 finished
Dec  5 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: damage ls {prefix=damage ls} (starting...)
Dec  5 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump loads {prefix=dump loads} (starting...)
Dec  5 02:27:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2356: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:24 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  5 02:27:24 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15551 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  5 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  5 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  5 02:27:25 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15553 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:25 compute-0 nova_compute[349548]: 2025-12-05 02:27:25.464 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  5 02:27:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  5 02:27:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396425913' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  5 02:27:25 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  5 02:27:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:27:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2277704271' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: ops {prefix=ops} (starting...)
Dec  5 02:27:26 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15561 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:26 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:26.212+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:27:26 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  5 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3289709011' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  5 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  5 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2324153540' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  5 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: session ls {prefix=session ls} (starting...)
Dec  5 02:27:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: status {prefix=status} (starting...)
Dec  5 02:27:26 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  5 02:27:26 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857087708' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  5 02:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  5 02:27:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3935095675' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.193540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647193570, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1277, "num_deletes": 253, "total_data_size": 1926638, "memory_usage": 1959536, "flush_reason": "Manual Compaction"}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647205610, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1896585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47337, "largest_seqno": 48613, "table_properties": {"data_size": 1890458, "index_size": 3394, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13116, "raw_average_key_size": 20, "raw_value_size": 1878084, "raw_average_value_size": 2880, "num_data_blocks": 152, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901524, "oldest_key_time": 1764901524, "file_creation_time": 1764901647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 12114 microseconds, and 4638 cpu microseconds.
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.205655) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1896585 bytes OK
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.205670) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207232) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207245) EVENT_LOG_v1 {"time_micros": 1764901647207241, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.207258) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1920830, prev total WAL file size 1920830, number of live WAL files 2.
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.208368) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1852KB)], [113(8977KB)]
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647208389, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 11089471, "oldest_snapshot_seqno": -1}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6252 keys, 9317781 bytes, temperature: kUnknown
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647258310, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9317781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9276629, "index_size": 24402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 163330, "raw_average_key_size": 26, "raw_value_size": 9164185, "raw_average_value_size": 1465, "num_data_blocks": 970, "num_entries": 6252, "num_filter_entries": 6252, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901647, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.258484) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9317781 bytes
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.260202) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.9 rd, 186.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.8 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(10.8) write-amplify(4.9) OK, records in: 6773, records dropped: 521 output_compression: NoCompression
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.260217) EVENT_LOG_v1 {"time_micros": 1764901647260210, "job": 68, "event": "compaction_finished", "compaction_time_micros": 49972, "compaction_time_cpu_micros": 19571, "output_level": 6, "num_output_files": 1, "total_output_size": 9317781, "num_input_records": 6773, "num_output_records": 6252, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647260602, "job": 68, "event": "table_file_deletion", "file_number": 115}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901647262019, "job": 68, "event": "table_file_deletion", "file_number": 113}
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.208266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:27:27.262454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15571 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:27:27 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  5 02:27:27 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801306621' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  5 02:27:27 compute-0 podman[471954]: 2025-12-05 02:27:27.667324586 +0000 UTC m=+0.079690923 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:27:27 compute-0 podman[471951]: 2025-12-05 02:27:27.675660618 +0000 UTC m=+0.086407308 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:27:27 compute-0 podman[471943]: 2025-12-05 02:27:27.685871194 +0000 UTC m=+0.108072856 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Dec  5 02:27:27 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15575 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:27 compute-0 nova_compute[349548]: 2025-12-05 02:27:27.830 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  5 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3441343144' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  5 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  5 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1173665859' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  5 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071045181' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 02:27:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  5 02:27:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760973472' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  5 02:27:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2358: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  5 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982683381' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  5 02:27:29 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15587 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:29.174+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  5 02:27:29 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  5 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  5 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835647' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  5 02:27:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  5 02:27:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326692733' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  5 02:27:29 compute-0 podman[158197]: time="2025-12-05T02:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:27:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec  5 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15593 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  5 02:27:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479113107' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  5 02:27:30 compute-0 nova_compute[349548]: 2025-12-05 02:27:30.467 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15597 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110501888 unmapped: 2220032 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110510080 unmapped: 2211840 heap: 112721920 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8a41000/0x0/0x4ffc00000, data 0x2f74dbe/0x3036000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43980c960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf000 session 0x55c43980d680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439965680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c439562000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1420165 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 48.267883301s of 48.293380737s, submitted: 8
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110452736 unmapped: 3317760 heap: 113770496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c43980d2c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43967a780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cec00 session 0x55c439c74d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439c74b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c4378c34a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c439e16b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c43ac4dc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c439892c00 session 0x55c439345e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 10199040 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c43ac4cb40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c43a54f860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c439866960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf400 session 0x55c4398a9680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 10190848 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109879296 unmapped: 10190848 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 10117120 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4398b2400 session 0x55c439101680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1464999 data_alloc: 234881024 data_used: 25731072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c437aaf860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109953024 unmapped: 10117120 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d8000/0x0/0x4ffc00000, data 0x34e3dce/0x35a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee400 session 0x55c439344b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef800 session 0x55c43ac4cf00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 111067136 unmapped: 9003008 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110804992 unmapped: 9265152 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1474021 data_alloc: 234881024 data_used: 26497024
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 111722496 unmapped: 8347648 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114540544 unmapped: 5529600 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114548736 unmapped: 5521408 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 5513216 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500421 data_alloc: 251658240 data_used: 30220288
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114556928 unmapped: 5513216 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.994903564s of 26.165307999s, submitted: 27
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114663424 unmapped: 5406720 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 5324800 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114409472 unmapped: 5660672 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114384896 unmapped: 5685248 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1500021 data_alloc: 251658240 data_used: 30224384
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 5677056 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114393088 unmapped: 5677056 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114425856 unmapped: 5644288 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f84d4000/0x0/0x4ffc00000, data 0x34e6df1/0x35aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114515968 unmapped: 5554176 heap: 120070144 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.991294861s of 17.457975388s, submitted: 90
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117735424 unmapped: 4440064 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1553143 data_alloc: 251658240 data_used: 30351360
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117768192 unmapped: 4407296 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e37000/0x0/0x4ffc00000, data 0x3b83df1/0x3c47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117825536 unmapped: 4349952 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117858304 unmapped: 4317184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 4308992 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117866496 unmapped: 4308992 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 4300800 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 4284416 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 4276224 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1554279 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117899264 unmapped: 4276224 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.652729034s of 32.879589081s, submitted: 50
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e2f000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116490240 unmapped: 5685248 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116498432 unmapped: 5677056 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116506624 unmapped: 5668864 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116514816 unmapped: 5660672 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116523008 unmapped: 5652480 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116531200 unmapped: 5644288 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116539392 unmapped: 5636096 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 5627904 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550727 data_alloc: 251658240 data_used: 30339072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x3b8adf1/0x3c4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ef400 session 0x55c4398c2f00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 153.511062622s of 153.552902222s, submitted: 2
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399f1000 session 0x55c439344780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399ee800 session 0x55c43912a780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 116555776 unmapped: 5619712 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1354486 data_alloc: 234881024 data_used: 19447808
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c438c37800 session 0x55c439c75e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 234881024 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43989b000 session 0x55c43986c3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351528 data_alloc: 218103808 data_used: 19435520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f8e13000/0x0/0x4ffc00000, data 0x2ba8d7f/0x2c6a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109502464 unmapped: 12673024 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c4399efc00 session 0x55c439345860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c437982c00 session 0x55c4399cde00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c43a4cf800 session 0x55c439344000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 120.342414856s of 120.453239441s, submitted: 19
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 ms_handle_reset con 0x55c437982c00 session 0x55c43912a960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107462656 unmapped: 14712832 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1301576 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9184000/0x0/0x4ffc00000, data 0x2838d7f/0x28fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107470848 unmapped: 14704640 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.429254532s of 25.455911636s, submitted: 4
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304964 data_alloc: 218103808 data_used: 16261120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9183000/0x0/0x4ffc00000, data 0x2838da2/0x28fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 15581184 heap: 122175488 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9183000/0x0/0x4ffc00000, data 0x2838da2/0x28fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 heartbeat osd_stat(store_statfs(0x4f9183000/0x0/0x4ffc00000, data 0x2838da2/0x28fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106594304 unmapped: 31416320 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c4398c25a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363204 data_alloc: 218103808 data_used: 16269312
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1363204 data_alloc: 218103808 data_used: 16269312
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f89f2000/0x0/0x4ffc00000, data 0x2fc6342/0x308b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c4398a8000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43912b680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ef400 session 0x55c43a54e1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 106659840 unmapped: 31350784 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.135446548s of 14.344388962s, submitted: 25
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c4398d41e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c4398c2b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113647616 unmapped: 24363008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c4399cd680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c4397dbe00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114630656 unmapped: 23379968 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ef800 session 0x55c43911a780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c439563680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c439964f00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c436d8d0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434785 data_alloc: 234881024 data_used: 23072768
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43967a960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 23371776 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52000 session 0x55c4373b7a40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c43802b0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438c37800 session 0x55c437bdf0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c4000/0x0/0x4ffc00000, data 0x35f2416/0x36ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114638848 unmapped: 23371776 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399ee800 session 0x55c43806da40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435525 data_alloc: 234881024 data_used: 23080960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114712576 unmapped: 23298048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114720768 unmapped: 23289856 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1481765 data_alloc: 234881024 data_used: 29540352
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c4399efc00 session 0x55c43806d680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52400 session 0x55c4397db4a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c438d52800 session 0x55c439101680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115679232 unmapped: 22331392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.625425339s of 23.886270523s, submitted: 40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 heartbeat osd_stat(store_statfs(0x4f83c3000/0x0/0x4ffc00000, data 0x35f2439/0x36bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114130944 unmapped: 23879680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 ms_handle_reset con 0x55c437982c00 session 0x55c437953c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 24305664 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113704960 unmapped: 24305664 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387971 data_alloc: 234881024 data_used: 23068672
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113713152 unmapped: 24297472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 ms_handle_reset con 0x55c438c37800 session 0x55c43965e1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318945 data_alloc: 218103808 data_used: 16277504
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107479040 unmapped: 30531584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 heartbeat osd_stat(store_statfs(0x4f917d000/0x0/0x4ffc00000, data 0x283c4cd/0x2900000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.102030754s of 12.603260040s, submitted: 82
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f917a000/0x0/0x4ffc00000, data 0x283df30/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1322943 data_alloc: 218103808 data_used: 16285696
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323784 data_alloc: 218103808 data_used: 16285696
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 heartbeat osd_stat(store_statfs(0x4f917b000/0x0/0x4ffc00000, data 0x283df30/0x2903000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107184128 unmapped: 30826496 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 ms_handle_reset con 0x55c4399ee800 session 0x55c437172d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 heartbeat osd_stat(store_statfs(0x4f89ea000/0x0/0x4ffc00000, data 0x2fcb4d0/0x3093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1381830 data_alloc: 218103808 data_used: 16293888
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 heartbeat osd_stat(store_statfs(0x4f89ea000/0x0/0x4ffc00000, data 0x2fcb4d0/0x3093000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 30801920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.494970322s of 15.655971527s, submitted: 28
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1380284 data_alloc: 218103808 data_used: 16293888
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 30810112 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 ms_handle_reset con 0x55c4399efc00 session 0x55c439867a40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 ms_handle_reset con 0x55c4398b3000 session 0x55c4398bda40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332618 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 heartbeat osd_stat(store_statfs(0x4f9174000/0x0/0x4ffc00000, data 0x284167e/0x2909000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 30760960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3000.1 total, 600.0 interval
Cumulative writes: 7411 writes, 29K keys, 7411 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
Cumulative WAL: 7411 writes, 1632 syncs, 4.54 writes per sync, written: 0.02 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 761 writes, 2337 keys, 761 commit groups, 1.0 writes per commit group, ingest: 1.65 MB, 0.00 MB/s
Interval WAL: 761 writes, 334 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107257856 unmapped: 30752768 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107266048 unmapped: 30744576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 30736384 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c43a4cf400 session 0x55c43a54e780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c4397da1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.469711304s of 174.075759888s, submitted: 52
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c43986dc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b3000 session 0x55c4399641e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.715733528s of 10.014011383s, submitted: 44
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 34160640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2c00 session 0x55c43990bc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c437982c00 session 0x55c4398c2d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4399f0c00 session 0x55c4398c2780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585576057s of 11.133249283s, submitted: 76
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 37683200 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c4373285a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 80.609878540s of 80.641670227s, submitted: 13
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 37224448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c439c743c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061603 data_alloc: 218103808 data_used: 4386816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10c5c1b/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b3000 session 0x55c4398a9860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c437982c00 session 0x55c4373b0780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2000 session 0x55c43911be00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2800 session 0x55c43803e780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.751800537s of 59.914466858s, submitted: 18
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4cf400 session 0x55c43911a1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399f0c00 session 0x55c4378c32c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 30171136 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9cd8000/0x0/0x4ffc00000, data 0x18c935b/0x1995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439165e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c436fc4d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 30654464 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c439164960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c437aa1c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c43914a1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210372 data_alloc: 218103808 data_used: 11231232
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439859860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398a0800 session 0x55c43914a000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4ce800 session 0x55c437319c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4398a9a40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c43914bc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373312c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 30384128 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438cd6800 session 0x55c437330000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c4399cd4a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4399cde00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30343168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24f1f7b/0x25c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4397dab40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 30334976 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 30056448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4399643c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c43a4ce800 session 0x55c4398c3680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c437982c00 session 0x55c4398665a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2000 session 0x55c43980cd20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4398d4780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304547 data_alloc: 218103808 data_used: 11243520
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8b0c000/0x0/0x4ffc00000, data 0x2a94f58/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4398d45a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b3800 session 0x55c437329860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370618 data_alloc: 234881024 data_used: 19755008
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508833885s of 12.396708488s, submitted: 129
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 28868608 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43965e3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43802a000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387780 data_alloc: 234881024 data_used: 21659648
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c437aa1e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 29777920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c4399cd0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302492 data_alloc: 234881024 data_used: 17076224
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30400512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4399cc5a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4399cd2c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399cc000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43802ba40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.453773499s of 26.580440521s, submitted: 32
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43986d860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c43965fc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4373b70e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43914bc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c4398a92c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389229 data_alloc: 234881024 data_used: 21372928
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a6ba1d/0x2b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 27361280 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 27344896 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 20815872 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43980c3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 22495232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8f000/0x0/0x4ffc00000, data 0x370ea1d/0x37de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 22446080 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503030 data_alloc: 234881024 data_used: 22388736
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 22896640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 20946944 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 20226048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 17096704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4398590e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.942190170s of 12.467167854s, submitted: 119
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c4399cc1e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 19963904 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451822 data_alloc: 234881024 data_used: 22401024
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43967b4a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.669157028s of 11.117276192s, submitted: 70
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.523187637s of 29.533178329s, submitted: 1
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c439e163c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c439100f00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c439c74960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43990b2c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399ccf00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c4373183c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43965e000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c437c10960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3000 session 0x55c43965ef00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c43717eb40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 22814720 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352478 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.346131325s of 12.543250084s, submitted: 31
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f89f5000/0x0/0x4ffc00000, data 0x2750949/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355411 data_alloc: 234881024 data_used: 17719296
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c437aa14a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef400 session 0x55c439101e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4de00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.978549957s of 29.158163071s, submitted: 17
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91be000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324439 data_alloc: 234881024 data_used: 17666048
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b9000/0x0/0x4ffc00000, data 0x23e8949/0x24b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327689 data_alloc: 234881024 data_used: 17661952
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.963225365s of 13.028007507s, submitted: 11
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b8000/0x0/0x4ffc00000, data 0x23e9949/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326709 data_alloc: 234881024 data_used: 17661952
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 32628736 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d47000/0x0/0x4ffc00000, data 0x3859959/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4c000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474162 data_alloc: 234881024 data_used: 17670144
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/262505520' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473458 data_alloc: 234881024 data_used: 17670144
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.567854881s of 15.844666481s, submitted: 24
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437380960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 24592384 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437aa0d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1525586 data_alloc: 251658240 data_used: 37007360
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 13565952 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439838000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c4398a90e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43806cd20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa05a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54e5a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54ef00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43990b4a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x3e8a4d6/0x3f59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439101c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4373314a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443055 data_alloc: 234881024 data_used: 30597120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443187 data_alloc: 234881024 data_used: 30597120
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ef400 session 0x55c439101a40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439100f00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398a92c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438015680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 20307968 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.293066978s of 13.490474701s, submitted: 33
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b0b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee000 session 0x55c4373310e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437330d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aae3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c437380960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507941 data_alloc: 251658240 data_used: 35082240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437381e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524263 data_alloc: 251658240 data_used: 36999168
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 19685376 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 18366464 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.743537903s of 32.836509705s, submitted: 6
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 12705792 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 9158656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 9125888 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631561 data_alloc: 251658240 data_used: 41697280
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 10166272 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x470c4e6/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141541376 unmapped: 10018816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697183 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211829185s of 13.867918968s, submitted: 144
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692287 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437319680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437aaeb40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439164960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4371734a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.983115196s of 13.000985146s, submitted: 2
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439964b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b63c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b01e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 25296896 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43967b0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398c2f00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 25288704 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43ac4d680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438014d20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43806da40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437aa0960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0b40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43912a5a0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437bdfc20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 25911296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398a9a40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439101e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777183 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777359 data_alloc: 251658240 data_used: 41992192
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c4373303c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.160284996s of 10.497438431s, submitted: 42
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5800 session 0x55c43a54e3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 25706496 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807465 data_alloc: 251658240 data_used: 45797376
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 19619840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 17293312 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 16916480 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 16621568 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.395227432s of 26.412460327s, submitted: 3
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 16457728 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 16449536 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 151429120 unmapped: 14827520 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.783483505s of 10.946872711s, submitted: 24
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 14172160 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889331 data_alloc: 268435456 data_used: 52891648
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152961024 unmapped: 13295616 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5df9000/0x0/0x4ffc00000, data 0x57a3519/0x5875000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 13172736 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5de9000/0x0/0x4ffc00000, data 0x57b3519/0x5885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1911755 data_alloc: 268435456 data_used: 53923840
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4d2c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398583c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 13500416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398c32c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1743569 data_alloc: 251658240 data_used: 43810816
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c43a54ef00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c43802b0e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.789140701s of 12.240109444s, submitted: 93
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437c112c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.772724152s of 19.821142197s, submitted: 11
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625872 data_alloc: 251658240 data_used: 39800832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437aaef00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c4398a8000
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4c780
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s
Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 22470656 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: mgrc ms_handle_reset ms_handle_reset con 0x55c437983800
Dec  5 02:27:30 compute-0 ceph-osd[208828]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec  5 02:27:30 compute-0 ceph-osd[208828]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: mgrc handle_mgr_configure stats_period=5
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.267555237s of 100.389305115s, submitted: 23
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c43ac4de00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398a90e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4373292c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439e17680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437319e00
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681860 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c43914a3c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684438 data_alloc: 251658240 data_used: 37978112
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 26755072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 26517504 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 24928256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee400 session 0x55c439867c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.556104660s of 31.715848923s, submitted: 28
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 24788992 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 24723456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.810025215s of 12.461395264s, submitted: 108
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791816 data_alloc: 251658240 data_used: 49373184
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159358976 unmapped: 11100160 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5eab000/0x0/0x4ffc00000, data 0x56ed4f9/0x57bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159612928 unmapped: 10846208 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.857128143s of 12.298008919s, submitted: 144
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159940608 unmapped: 10518528 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159956992 unmapped: 10502144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.230848312s of 21.238258362s, submitted: 1
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.680427551s of 17.691595078s, submitted: 2
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888652 data_alloc: 268435456 data_used: 51240960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888476 data_alloc: 268435456 data_used: 51240960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009906769s of 13.027759552s, submitted: 2
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889356 data_alloc: 268435456 data_used: 51240960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.845460892s of 22.864625931s, submitted: 4
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158867456 unmapped: 11591680 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.814193726s of 215.831085205s, submitted: 14
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159014912 unmapped: 11444224 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1894956 data_alloc: 251658240 data_used: 51838976
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892660 data_alloc: 251658240 data_used: 51843072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.156604767s of 22.175872803s, submitted: 2
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 11403264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.559044838s of 13.568158150s, submitted: 1
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.334384918s of 15.360384941s, submitted: 14
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s#012Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 11214848 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 196.116027832s of 196.124725342s, submitted: 1
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c43965fa40
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c4398c23c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439c75c20
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321928978s of 13.440460205s, submitted: 22
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c438015680
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1722960 data_alloc: 234881024 data_used: 44167168
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4380143c0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.228912354s of 11.431051254s, submitted: 36
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142688256 unmapped: 27770880 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 45506560 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 ms_handle_reset con 0x55c43989d400 session 0x55c439867860
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 45465600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x18d00a7/0x19a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 140 ms_handle_reset con 0x55c4398b2800 session 0x55c4373310e0
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 62226432 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262617 data_alloc: 218103808 data_used: 11313152
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 ms_handle_reset con 0x55c4399ee800 session 0x55c437c10960
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259196 data_alloc: 218103808 data_used: 11313152
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351562500s of 11.958790779s, submitted: 94
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc5000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 62062592 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 62038016 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:30 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 61841408 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125493248 unmapped: 61751296 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:27:30 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 61849600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:30 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec  5 02:27:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:30 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 02:27:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15601 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  5 02:27:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000436730' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  5 02:27:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15605 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:27:31 compute-0 openstack_network_exporter[366555]: ERROR   02:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:27:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  5 02:27:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2704603559' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  5 02:27:31 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15609 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 02:27:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203180749' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15613 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15615 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:27:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  5 02:27:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133351809' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  5 02:27:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2360: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:32 compute-0 nova_compute[349548]: 2025-12-05 02:27:32.832 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15619 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:27:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  5 02:27:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1408480943' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  5 02:27:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15623 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:27:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  5 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909200176' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  5 02:27:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15631 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:27:34 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:27:34.289+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:27:34 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  5 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139723308' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  5 02:27:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  5 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107936257' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  5 02:27:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  5 02:27:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/867654179' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  5 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  5 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436156114' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  5 02:27:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  5 02:27:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827734816' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  5 02:27:35 compute-0 nova_compute[349548]: 2025-12-05 02:27:35.471 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115908608 unmapped: 7454720 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1523984 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115916800 unmapped: 7446528 heap: 123363328 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af69000 session 0x564847d04780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484804a000 session 0x564848005c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff7000 session 0x56484a71c5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7f27000/0x0/0x4ffc00000, data 0x3a914cc/0x3b57000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847e8dc00 session 0x564848a7ad20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.385467529s of 48.404853821s, submitted: 2
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af76800 session 0x56484ab0e000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 10526720 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847e8dc00 session 0x56484a8450e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565757 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114950144 unmapped: 10518528 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1565757 data_alloc: 234881024 data_used: 26050560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 114499584 unmapped: 10969088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117448704 unmapped: 8019968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 7069696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 7069696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118431744 unmapped: 7036928 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 7004160 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 6995968 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1596477 data_alloc: 251658240 data_used: 30371840
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7ade000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.026542664s of 26.152660370s, submitted: 20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118489088 unmapped: 6979584 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118521856 unmapped: 6946816 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118595584 unmapped: 6873088 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118603776 unmapped: 6864896 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1595197 data_alloc: 251658240 data_used: 30375936
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f76ce000/0x0/0x4ffc00000, data 0x3ed952e/0x3fa0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 6889472 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.629758835s of 17.197685242s, submitted: 90
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119775232 unmapped: 5693440 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1612781 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 5152768 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 4620288 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f7253000/0x0/0x4ffc00000, data 0x434952e/0x4410000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121921536 unmapped: 3547136 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1643619 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121929728 unmapped: 3538944 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x43c052e/0x4487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121962496 unmapped: 3506176 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71e7000/0x0/0x4ffc00000, data 0x43c052e/0x4487000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121995264 unmapped: 3473408 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122257408 unmapped: 3211264 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 3203072 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122265600 unmapped: 3203072 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71ce000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122273792 unmapped: 3194880 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1644331 data_alloc: 251658240 data_used: 30416896
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 3186688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122281984 unmapped: 3186688 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.531749725s of 33.041004181s, submitted: 99
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123396096 unmapped: 2072576 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123404288 unmapped: 2064384 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123437056 unmapped: 2031616 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123445248 unmapped: 2023424 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123453440 unmapped: 2015232 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123461632 unmapped: 2007040 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123469824 unmapped: 1998848 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123478016 unmapped: 1990656 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123486208 unmapped: 1982464 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123494400 unmapped: 1974272 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123502592 unmapped: 1966080 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123510784 unmapped: 1957888 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645243 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 153.424575806s of 153.492935181s, submitted: 10
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1645079 data_alloc: 251658240 data_used: 30408704
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff6000 session 0x56484acbf2c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848048800 session 0x56484af70000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af68800 session 0x56484899a780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f71c9000/0x0/0x4ffc00000, data 0x43d952e/0x44a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123518976 unmapped: 1949696 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 5226496 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484af69000 session 0x564848e265a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564849ec2400 session 0x56484a71da40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff6c00 session 0x564848e272c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564847ff7c00 session 0x564847d041e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848e20000 session 0x564849c07680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1497086 data_alloc: 234881024 data_used: 25481216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120266752 unmapped: 5201920 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848d3a400 session 0x564849c17680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 119.393753052s of 119.798889160s, submitted: 65
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848d3bc00 session 0x56484acbf0e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x56484ac71000 session 0x56484ab36f00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120274944 unmapped: 5193728 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f80e6000/0x0/0x4ffc00000, data 0x34c34bc/0x3588000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1229809 data_alloc: 218103808 data_used: 16130048
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 ms_handle_reset con 0x564848e20000 session 0x564848020960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bb7000/0x0/0x4ffc00000, data 0x19f449c/0x1ab7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1226977 data_alloc: 218103808 data_used: 16121856
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bbb000/0x0/0x4ffc00000, data 0x19f049c/0x1ab3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.754179001s of 26.179981232s, submitted: 49
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1228951 data_alloc: 218103808 data_used: 16125952
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 12533760 heap: 125468672 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 heartbeat osd_stat(store_statfs(0x4f9bba000/0x0/0x4ffc00000, data 0x19f04ac/0x1ab4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112975872 unmapped: 29278208 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287797 data_alloc: 218103808 data_used: 16134144
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848048800 session 0x5648480052c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f93ba000/0x0/0x4ffc00000, data 0x21f04ac/0x22b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287797 data_alloc: 218103808 data_used: 16134144
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f93b6000/0x0/0x4ffc00000, data 0x21f2029/0x22b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x56484aafe3c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x56484aaffe00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112984064 unmapped: 29270016 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a5e9e00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x5648492f05a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 21405696 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x56484a7ff0e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1308437 data_alloc: 234881024 data_used: 22953984
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x5648481c41e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120832000 unmapped: 21422080 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.666840553s of 15.728053093s, submitted: 4
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x5648481c5a40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x564847f98b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a843680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x564847f9d4a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484ac71000 session 0x56484aa510e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x56484a8450e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3bc00 session 0x56484a845860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848e20000 session 0x56484a845680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120979456 unmapped: 21274624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355366 data_alloc: 234881024 data_used: 22953984
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 21430272 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120823808 unmapped: 21430272 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120799232 unmapped: 21454848 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [1])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1387338 data_alloc: 234881024 data_used: 27414528
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 heartbeat osd_stat(store_statfs(0x4f8e45000/0x0/0x4ffc00000, data 0x2763039/0x2829000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120807424 unmapped: 21446656 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af68800 session 0x564848e26000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.050895691s of 22.181135178s, submitted: 21
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x56484af76000 session 0x56484acbf680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117817344 unmapped: 24436736 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 ms_handle_reset con 0x564848d3a400 session 0x564848020960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313333 data_alloc: 234881024 data_used: 22953984
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117833728 unmapped: 24420352 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f93b7000/0x0/0x4ffc00000, data 0x21f2029/0x22b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 ms_handle_reset con 0x564848d3bc00 session 0x564849e341e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1242578 data_alloc: 218103808 data_used: 16142336
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9bb4000/0x0/0x4ffc00000, data 0x19f3bea/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 heartbeat osd_stat(store_statfs(0x4f9bb4000/0x0/0x4ffc00000, data 0x19f3bea/0x1ab9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.768804550s of 11.076835632s, submitted: 50
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245552 data_alloc: 218103808 data_used: 16142336
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb1000/0x0/0x4ffc00000, data 0x19f564d/0x1abc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb1000/0x0/0x4ffc00000, data 0x19f564d/0x1abc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112787456 unmapped: 29466624 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 29417472 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248268 data_alloc: 218103808 data_used: 16142336
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f9bb0000/0x0/0x4ffc00000, data 0x19f567b/0x1abe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 29417472 heap: 142254080 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 heartbeat osd_stat(store_statfs(0x4f93b0000/0x0/0x4ffc00000, data 0x21f5680/0x22be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.988888741s of 10.125345230s, submitted: 22
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 ms_handle_reset con 0x564848e20000 session 0x564849326d20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307193 data_alloc: 218103808 data_used: 16150528
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f93ac000/0x0/0x4ffc00000, data 0x21f71fd/0x22c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1307193 data_alloc: 218103808 data_used: 16150528
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 38592512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 heartbeat osd_stat(store_statfs(0x4f93ac000/0x0/0x4ffc00000, data 0x21f71fd/0x22c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 113123328 unmapped: 37527552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 ms_handle_reset con 0x56484ac71000 session 0x564847ceed20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1255422 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 heartbeat osd_stat(store_statfs(0x4f9baa000/0x0/0x4ffc00000, data 0x19f8d9b/0x1ac2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.831582069s of 15.007717133s, submitted: 31
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.2 total, 600.0 interval#012Cumulative writes: 8925 writes, 35K keys, 8925 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 8925 writes, 2023 syncs, 4.41 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 851 writes, 2760 keys, 851 commit groups, 1.0 writes per commit group, ingest: 1.82 MB, 0.00 MB/s#012Interval WAL: 851 writes, 368 syncs, 2.31 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 38510592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1258220 data_alloc: 218103808 data_used: 16158720
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4f9ba8000/0x0/0x4ffc00000, data 0x19fa7fe/0x1ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 38502400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484804a000 session 0x56484af3af00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 166.313781738s of 166.387649536s, submitted: 20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x564847ff7000 session 0x56484af3b860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108568576 unmapped: 42082304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1152304 data_alloc: 218103808 data_used: 11788288
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac71000 session 0x564848146d20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149206 data_alloc: 218103808 data_used: 11771904
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.749129295s of 10.003149033s, submitted: 39
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 41992192 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 41984000 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.765543938s of 10.422777176s, submitted: 87
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac72c00 session 0x56484af661e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484887c000 session 0x56484af674a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 41943040 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054182 data_alloc: 218103808 data_used: 7081984
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484abb5000 session 0x56484aeee3c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.139320374s of 81.423446655s, submitted: 52
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051319 data_alloc: 218103808 data_used: 7057408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 45916160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 45907968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa072000/0x0/0x4ffc00000, data 0x1531265/0x15fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 45899776 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484aafe1e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x5648493263c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484ac72000 session 0x564847f9da40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484a5e8f00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 45842432 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x56484a5e9860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.769268036s of 59.916522980s, submitted: 11
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 42229760 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564848a7a5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198393 data_alloc: 218103808 data_used: 11743232
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9bf9000/0x0/0x4ffc00000, data 0x19a4d92/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 42246144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848ec7c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a5e0d20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeed680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564847ceed20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x56484a845680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 41959424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeefa40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484aeee5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72c00 session 0x56484a842960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564848d3bc00 session 0x56484a5ec5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848005e00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a78fa40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 40763392 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x5648493261e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8dc1000/0x0/0x4ffc00000, data 0x27d99c5/0x28ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x564847d114a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72c00 session 0x564847d11c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x56484ab0e1e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0ef00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370612 data_alloc: 218103808 data_used: 11743232
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x56484a845860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76000 session 0x56484a5e0d20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564848e26b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484a9fe000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x5648474ef4a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x5648474ee5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484aa26400 session 0x564847d10960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564847d10f00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e3c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0eb40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 40009728 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f883f000/0x0/0x4ffc00000, data 0x2d5c5d5/0x2e2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564849db2000 session 0x56484a845a40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x56484a9fe5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8813000/0x0/0x4ffc00000, data 0x2d86618/0x2e5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378599 data_alloc: 218103808 data_used: 11751424
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 40017920 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.355422020s of 12.096014023s, submitted: 115
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 38879232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 34832384 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484aa50780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484813c000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1485893 data_alloc: 234881024 data_used: 26185728
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484ab0f0e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484aeee1e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848d3a400 session 0x564849327c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564847f98b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484a8450e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f87e6000/0x0/0x4ffc00000, data 0x2db208e/0x2e88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 33431552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451549 data_alloc: 234881024 data_used: 24010752
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 32473088 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31965184 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.486093521s of 26.861923218s, submitted: 69
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x564849c06960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484813d860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76400 session 0x56484aa512c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564848ec74a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x56484a8421e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550538 data_alloc: 234881024 data_used: 28581888
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 23494656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 22577152 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x5648481c5c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620314 data_alloc: 234881024 data_used: 29696000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 125485056 unmapped: 25165824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 23896064 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 23224320 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 19136512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1686404 data_alloc: 251658240 data_used: 36605952
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.745903969s of 12.423884392s, submitted: 151
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484ab36960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477c00 session 0x56484a5ec1e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564849e34f00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f80a7000/0x0/0x4ffc00000, data 0x34f3ff9/0x35c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134373376 unmapped: 16277504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 17661952 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1658098 data_alloc: 234881024 data_used: 30609408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 17498112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x3fd5ff9/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,3])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665698 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.259255409s of 13.825000763s, submitted: 121
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.511526108s of 12.544201851s, submitted: 4
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1662094 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661650 data_alloc: 234881024 data_used: 31031296
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 18022400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172998428s of 10.241044044s, submitted: 10
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664114 data_alloc: 234881024 data_used: 31019008
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648481472c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564848021e00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484aeed860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeecb40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeede00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484a9b8000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484a9b9860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847cef860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648488f0780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484af3d680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484a9b85a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1719347 data_alloc: 234881024 data_used: 31019008
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.484535217s of 10.727886200s, submitted: 38
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x564847f9d680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x564848143680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564847f9cb40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468530 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484a91c000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x56484a91d2c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564848143c20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x5648481434a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473527 data_alloc: 234881024 data_used: 17731584
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922307968s of 10.159023285s, submitted: 48
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x5648492f1860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847f983c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 27738112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8406000/0x0/0x4ffc00000, data 0x3194ff9/0x3268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484aeec960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 29589504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.571750641s of 24.842250824s, submitted: 46
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 29532160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 29433856 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 29229056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435561 data_alloc: 234881024 data_used: 19324928
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.756362915s of 18.791091919s, submitted: 4
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484a845680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x56484aa51860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x56484aeee5a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff7000 session 0x56484af2c780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x564848e265a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.284442902s of 10.303551674s, submitted: 2
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847cf8c00 session 0x564849e341e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847fcc400 session 0x56484a8443c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484887c000 session 0x56484ab37a40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab370e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a5ec3c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 33398784 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564847d114a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847cef860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69400 session 0x56484a9fef00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356384 data_alloc: 218103808 data_used: 11759616
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9fe780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a9fe000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564848e272c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x26d9b95/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364170 data_alloc: 218103808 data_used: 11759616
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848ec7860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70800 session 0x56484a842960
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab0f860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484813dc20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737582207s of 13.344229698s, submitted: 87
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac421e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484ab36f00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 39936000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x56484a78ef00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9b8780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a8434a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535513 data_alloc: 234881024 data_used: 20418560
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 36896768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564847d11860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab36b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571513 data_alloc: 234881024 data_used: 25505792
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484aa50b40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aa51860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 35463168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33005568 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 27271168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663144 data_alloc: 251658240 data_used: 37560320
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 23625728 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136028160 unmapped: 23552000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.552471161s of 32.833507538s, submitted: 33
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 20283392 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139304960 unmapped: 20275200 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7387000/0x0/0x4ffc00000, data 0x4212bc8/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791954 data_alloc: 251658240 data_used: 38576128
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 15753216 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 15073280 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 16146432 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865102 data_alloc: 251658240 data_used: 38723584
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865262 data_alloc: 251658240 data_used: 38727680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.018227577s of 13.871615410s, submitted: 193
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6672000/0x0/0x4ffc00000, data 0x4b17bc8/0x4bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865686 data_alloc: 251658240 data_used: 38731776
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1863118 data_alloc: 251658240 data_used: 38731776
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab0fc20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484813cd20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484ab374a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848048800 session 0x56484a9fe780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.988618851s of 13.008173943s, submitted: 2
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x564847d114a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143294464 unmapped: 17899520 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a845e00
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f5d46000/0x0/0x4ffc00000, data 0x5442bf1/0x5518000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143327232 unmapped: 17866752 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1934703 data_alloc: 251658240 data_used: 38731776
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 14704640 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a844780
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab37a40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc53/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2046908 data_alloc: 251658240 data_used: 38731776
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc8c/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0400 session 0x56484aa51a40
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8425a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 25378816 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a843680
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a843860
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 25059328 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.890540123s of 11.288828850s, submitted: 66
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144015360 unmapped: 25051136 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2050670 data_alloc: 251658240 data_used: 38731776
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 24993792 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 24813568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 22355968 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 15196160 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 158777344 unmapped: 10289152 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2184302 data_alloc: 268435456 data_used: 56541184
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161628160 unmapped: 7438336 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.762050629s of 34.795322418s, submitted: 5
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165208064 unmapped: 3858432 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2233138 data_alloc: 268435456 data_used: 58028032
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a4e000/0x0/0x4ffc00000, data 0x6738c9c/0x6810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165330944 unmapped: 3735552 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 6807552 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165462016 unmapped: 6758400 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2345136 data_alloc: 268435456 data_used: 58961920
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a5e01e0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484a9fe000
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x5648489743c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2161675 data_alloc: 251658240 data_used: 49319936
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.678235054s of 13.497112274s, submitted: 173
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484aa505a0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564848144d20
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161587200 unmapped: 10633216 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a42000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484af7c3c0
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:27:35 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:27:35 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec  5 02:27:35 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:27:35 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:30:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:30:59 compute-0 nova_compute[349548]: 2025-12-05 02:30:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:30:59 compute-0 nova_compute[349548]: 2025-12-05 02:30:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:30:59 compute-0 rsyslogd[188644]: imjournal: 16408 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec  5 02:30:59 compute-0 podman[158197]: time="2025-12-05T02:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:30:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8210 "" "Go-http-client/1.1"
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.105 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.106 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.106 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:31:00 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:31:00 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901075375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.582 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:31:00 compute-0 nova_compute[349548]: 2025-12-05 02:31:00.682 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.110 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.111 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3923MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.112 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.112 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.194 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.195 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: ERROR   02:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:31:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.464 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:31:01 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:31:01 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306036107' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:31:01 compute-0 nova_compute[349548]: 2025-12-05 02:31:01.994 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.008 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.030 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.034 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.035 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:31:02 compute-0 podman[481981]: 2025-12-05 02:31:02.72830144 +0000 UTC m=+0.130585550 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_id=edpm, distribution-scope=public, name=ubi9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:31:02 compute-0 podman[481980]: 2025-12-05 02:31:02.743029627 +0000 UTC m=+0.151250769 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:31:02 compute-0 podman[481982]: 2025-12-05 02:31:02.750806993 +0000 UTC m=+0.146297136 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  5 02:31:02 compute-0 nova_compute[349548]: 2025-12-05 02:31:02.945 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:02 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2465: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.036 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:03 compute-0 nova_compute[349548]: 2025-12-05 02:31:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:04 compute-0 nova_compute[349548]: 2025-12-05 02:31:04.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:04 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.358711) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865358791, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 251, "total_data_size": 1961945, "memory_usage": 1988416, "flush_reason": "Manual Compaction"}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865376814, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 1932375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49740, "largest_seqno": 51003, "table_properties": {"data_size": 1926332, "index_size": 3374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12570, "raw_average_key_size": 19, "raw_value_size": 1914262, "raw_average_value_size": 3014, "num_data_blocks": 152, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901734, "oldest_key_time": 1764901734, "file_creation_time": 1764901865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 18294 microseconds, and 9720 cpu microseconds.
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.377014) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 1932375 bytes OK
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.377039) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379449) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379469) EVENT_LOG_v1 {"time_micros": 1764901865379462, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.379490) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 1956254, prev total WAL file size 1956254, number of live WAL files 2.
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.380831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(1887KB)], [119(7260KB)]
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865380945, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 9367129, "oldest_snapshot_seqno": -1}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 6513 keys, 7619864 bytes, temperature: kUnknown
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865443817, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 7619864, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7580480, "index_size": 21994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 170134, "raw_average_key_size": 26, "raw_value_size": 7466898, "raw_average_value_size": 1146, "num_data_blocks": 868, "num_entries": 6513, "num_filter_entries": 6513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901865, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.444207) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 7619864 bytes
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.446610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.6 rd, 120.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(8.8) write-amplify(3.9) OK, records in: 7027, records dropped: 514 output_compression: NoCompression
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.446639) EVENT_LOG_v1 {"time_micros": 1764901865446625, "job": 72, "event": "compaction_finished", "compaction_time_micros": 63057, "compaction_time_cpu_micros": 36941, "output_level": 6, "num_output_files": 1, "total_output_size": 7619864, "num_input_records": 7027, "num_output_records": 6513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865447401, "job": 72, "event": "table_file_deletion", "file_number": 121}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901865450250, "job": 72, "event": "table_file_deletion", "file_number": 119}
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.380524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:31:05.450404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:31:05 compute-0 nova_compute[349548]: 2025-12-05 02:31:05.688 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:06 compute-0 nova_compute[349548]: 2025-12-05 02:31:06.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:06 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:31:07 compute-0 nova_compute[349548]: 2025-12-05 02:31:07.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:08 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:10 compute-0 nova_compute[349548]: 2025-12-05 02:31:10.692 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:10 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:31:11 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:31:11 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 1fc2076e-c47b-412a-8e68-152c6122c378 does not exist
Dec  5 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8580bfce-e54b-4c0f-ab3f-5b189256300c does not exist
Dec  5 02:31:12 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cd946155-1d0a-47d8-989f-c67d982be91f does not exist
Dec  5 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:31:12 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:31:12 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:12 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:31:12 compute-0 nova_compute[349548]: 2025-12-05 02:31:12.951 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:12 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.100207262 +0000 UTC m=+0.080906517 container create 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.070168571 +0000 UTC m=+0.050867886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:13 compute-0 systemd[1]: Started libpod-conmon-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope.
Dec  5 02:31:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.259874775 +0000 UTC m=+0.240574100 container init 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.280425021 +0000 UTC m=+0.261124286 container start 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.286529958 +0000 UTC m=+0.267229193 container attach 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:31:13 compute-0 admiring_thompson[482318]: 167 167
Dec  5 02:31:13 compute-0 systemd[1]: libpod-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope: Deactivated successfully.
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.295402025 +0000 UTC m=+0.276101350 container died 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:31:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a8fe0ec7e4a8d98f154c6cdcc511542ed65e63d22835e1a8e6fe7761a41862-merged.mount: Deactivated successfully.
Dec  5 02:31:13 compute-0 podman[482303]: 2025-12-05 02:31:13.372096011 +0000 UTC m=+0.352795226 container remove 06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  5 02:31:13 compute-0 systemd[1]: libpod-conmon-06c2d9685122ed6f7495a0ce5b511f804aa4c299026791c60d8a6c2c536576ad.scope: Deactivated successfully.
Dec  5 02:31:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.663233877 +0000 UTC m=+0.094122942 container create b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.625701858 +0000 UTC m=+0.056590973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:13 compute-0 systemd[1]: Started libpod-conmon-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope.
Dec  5 02:31:13 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.830308955 +0000 UTC m=+0.261198040 container init b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.855877476 +0000 UTC m=+0.286766521 container start b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 02:31:13 compute-0 podman[482341]: 2025-12-05 02:31:13.862465257 +0000 UTC m=+0.293354342 container attach b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:31:13 compute-0 podman[482357]: 2025-12-05 02:31:13.896153785 +0000 UTC m=+0.120768975 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  5 02:31:13 compute-0 podman[482360]: 2025-12-05 02:31:13.911099708 +0000 UTC m=+0.133310898 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public)
Dec  5 02:31:13 compute-0 podman[482361]: 2025-12-05 02:31:13.936680901 +0000 UTC m=+0.148493320 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:31:13 compute-0 podman[482378]: 2025-12-05 02:31:13.943769596 +0000 UTC m=+0.134427891 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  5 02:31:14 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2471: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:15 compute-0 nova_compute[349548]: 2025-12-05 02:31:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:15 compute-0 festive_ellis[482358]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:31:15 compute-0 festive_ellis[482358]: --> relative data size: 1.0
Dec  5 02:31:15 compute-0 festive_ellis[482358]: --> All data devices are unavailable
Dec  5 02:31:15 compute-0 systemd[1]: libpod-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Deactivated successfully.
Dec  5 02:31:15 compute-0 podman[482341]: 2025-12-05 02:31:15.144937285 +0000 UTC m=+1.575826330 container died b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Dec  5 02:31:15 compute-0 systemd[1]: libpod-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Consumed 1.236s CPU time.
Dec  5 02:31:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f9b094e52b9c2088f89add001c647c955c56234a6ef18462308d5229a9cae82-merged.mount: Deactivated successfully.
Dec  5 02:31:15 compute-0 podman[482341]: 2025-12-05 02:31:15.255854263 +0000 UTC m=+1.686743338 container remove b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ellis, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:31:15 compute-0 systemd[1]: libpod-conmon-b920f6f228c6a95e67eee0cf6da3b321569d71da17e864610311707f8e8ccb41.scope: Deactivated successfully.
Dec  5 02:31:15 compute-0 nova_compute[349548]: 2025-12-05 02:31:15.697 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:31:16
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['backups', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.control']
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.47558462 +0000 UTC m=+0.093522424 container create 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.438783423 +0000 UTC m=+0.056721297 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:16 compute-0 systemd[1]: Started libpod-conmon-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope.
Dec  5 02:31:16 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.623398979 +0000 UTC m=+0.241336833 container init 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.639402502 +0000 UTC m=+0.257340306 container start 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.646838588 +0000 UTC m=+0.264776462 container attach 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:31:16 compute-0 trusting_ride[482635]: 167 167
Dec  5 02:31:16 compute-0 systemd[1]: libpod-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope: Deactivated successfully.
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.650212346 +0000 UTC m=+0.268150150 container died 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:31:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca05d602c15b5f194df7acc5300c228387d36de18a2f116f008de5250da06a2c-merged.mount: Deactivated successfully.
Dec  5 02:31:16 compute-0 podman[482621]: 2025-12-05 02:31:16.729032862 +0000 UTC m=+0.346970666 container remove 02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_ride, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:31:16 compute-0 systemd[1]: libpod-conmon-02d5514648d1966d8cc9b55845228cefd0246d69aad70ae4fc3a44ed81411859.scope: Deactivated successfully.
Dec  5 02:31:16 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.028523921 +0000 UTC m=+0.100738923 container create 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:16.991024203 +0000 UTC m=+0.063239245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:17 compute-0 systemd[1]: Started libpod-conmon-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope.
Dec  5 02:31:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.195749673 +0000 UTC m=+0.267964725 container init 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.224991121 +0000 UTC m=+0.297206113 container start 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.23253358 +0000 UTC m=+0.304748582 container attach 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 02:31:17 compute-0 gracious_elion[482675]: {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    "0": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "devices": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "/dev/loop3"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            ],
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_name": "ceph_lv0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_size": "21470642176",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "name": "ceph_lv0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "tags": {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_name": "ceph",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.crush_device_class": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.encrypted": "0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_id": "0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.vdo": "0"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            },
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "vg_name": "ceph_vg0"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        }
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    ],
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    "1": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "devices": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "/dev/loop4"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            ],
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_name": "ceph_lv1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_size": "21470642176",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "name": "ceph_lv1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "tags": {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_name": "ceph",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.crush_device_class": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.encrypted": "0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_id": "1",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.vdo": "0"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            },
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "vg_name": "ceph_vg1"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        }
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    ],
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    "2": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "devices": [
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "/dev/loop5"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            ],
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_name": "ceph_lv2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_size": "21470642176",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "name": "ceph_lv2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "tags": {
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.cluster_name": "ceph",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.crush_device_class": "",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.encrypted": "0",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osd_id": "2",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:                "ceph.vdo": "0"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            },
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "type": "block",
Dec  5 02:31:17 compute-0 gracious_elion[482675]:            "vg_name": "ceph_vg2"
Dec  5 02:31:17 compute-0 gracious_elion[482675]:        }
Dec  5 02:31:17 compute-0 gracious_elion[482675]:    ]
Dec  5 02:31:17 compute-0 gracious_elion[482675]: }
Dec  5 02:31:17 compute-0 nova_compute[349548]: 2025-12-05 02:31:17.953 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:31:17 compute-0 systemd[1]: libpod-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope: Deactivated successfully.
Dec  5 02:31:17 compute-0 podman[482658]: 2025-12-05 02:31:17.973783146 +0000 UTC m=+1.045998138 container died 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:31:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb23c203bccc672a716b2de9cdf49df033d0f0c03b02991ff426b2cbd0386756-merged.mount: Deactivated successfully.
Dec  5 02:31:18 compute-0 podman[482658]: 2025-12-05 02:31:18.066436354 +0000 UTC m=+1.138651326 container remove 0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_elion, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Dec  5 02:31:18 compute-0 systemd[1]: libpod-conmon-0e204ca0794fbf3a895033d811a35c1383d94d5f0b22c6ad69689af9817d165e.scope: Deactivated successfully.
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:31:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:18 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.327545161 +0000 UTC m=+0.082426202 container create 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.29336588 +0000 UTC m=+0.048246981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:19 compute-0 systemd[1]: Started libpod-conmon-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope.
Dec  5 02:31:19 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.475525885 +0000 UTC m=+0.230406926 container init 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.495104313 +0000 UTC m=+0.249985344 container start 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.501357364 +0000 UTC m=+0.256238405 container attach 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:31:19 compute-0 priceless_faraday[482852]: 167 167
Dec  5 02:31:19 compute-0 systemd[1]: libpod-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope: Deactivated successfully.
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.51017736 +0000 UTC m=+0.265058391 container died 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:31:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-472af10656b051fe3e2c22690b04752db429c46a9deb6aa9c612ca636874ed7b-merged.mount: Deactivated successfully.
Dec  5 02:31:19 compute-0 podman[482837]: 2025-12-05 02:31:19.597685499 +0000 UTC m=+0.352566530 container remove 16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_faraday, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:31:19 compute-0 systemd[1]: libpod-conmon-16bec9550120adb446ae040a590119798768c33d4c6dceba6ad919e8fc6b186f.scope: Deactivated successfully.
Dec  5 02:31:19 compute-0 podman[482876]: 2025-12-05 02:31:19.886265791 +0000 UTC m=+0.087480839 container create 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 02:31:19 compute-0 podman[482876]: 2025-12-05 02:31:19.852043688 +0000 UTC m=+0.053258786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:31:19 compute-0 systemd[1]: Started libpod-conmon-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope.
Dec  5 02:31:20 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.04649087 +0000 UTC m=+0.247705968 container init 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.080441955 +0000 UTC m=+0.281656993 container start 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:31:20 compute-0 podman[482876]: 2025-12-05 02:31:20.08888776 +0000 UTC m=+0.290103858 container attach 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:31:20 compute-0 nova_compute[349548]: 2025-12-05 02:31:20.703 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:31:20 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:21 compute-0 elegant_jemison[482892]: {
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_id": 0,
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "type": "bluestore"
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    },
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_id": 1,
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "type": "bluestore"
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    },
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_id": 2,
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:        "type": "bluestore"
Dec  5 02:31:21 compute-0 elegant_jemison[482892]:    }
Dec  5 02:31:21 compute-0 elegant_jemison[482892]: }
Dec  5 02:31:21 compute-0 systemd[1]: libpod-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Deactivated successfully.
Dec  5 02:31:21 compute-0 podman[482876]: 2025-12-05 02:31:21.3111766 +0000 UTC m=+1.512391638 container died 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:31:21 compute-0 systemd[1]: libpod-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Consumed 1.237s CPU time.
Dec  5 02:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-80e3d42ea15e237e4e4069d0e68337390369d365d87ea416e8d5aa5daaf395d6-merged.mount: Deactivated successfully.
Dec  5 02:31:21 compute-0 podman[482876]: 2025-12-05 02:31:21.416786844 +0000 UTC m=+1.618001882 container remove 2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_jemison, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:31:21 compute-0 systemd[1]: libpod-conmon-2db0ab8a40211cd82cab454d775b8068daf4fbc805e0211054f1ac75039a91cb.scope: Deactivated successfully.
Dec  5 02:31:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:31:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:31:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3a2e8728-b198-42e9-b61c-6d79700e4474 does not exist
Dec  5 02:31:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 8b236710-92a2-4b88-a332-a17980bbdf55 does not exist
Dec  5 02:31:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:22 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:31:22 compute-0 nova_compute[349548]: 2025-12-05 02:31:22.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:22 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:24 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:25 compute-0 nova_compute[349548]: 2025-12-05 02:31:25.710 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:26 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:31:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:31:27 compute-0 nova_compute[349548]: 2025-12-05 02:31:27.960 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:28 compute-0 podman[482988]: 2025-12-05 02:31:28.716776073 +0000 UTC m=+0.124983517 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:31:28 compute-0 podman[482987]: 2025-12-05 02:31:28.748118142 +0000 UTC m=+0.148976383 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  5 02:31:28 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:29 compute-0 podman[158197]: time="2025-12-05T02:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:31:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec  5 02:31:30 compute-0 nova_compute[349548]: 2025-12-05 02:31:30.715 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:30 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: ERROR   02:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:31:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:31:32 compute-0 nova_compute[349548]: 2025-12-05 02:31:32.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:32 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:33 compute-0 podman[483027]: 2025-12-05 02:31:33.711049268 +0000 UTC m=+0.107210901 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  5 02:31:33 compute-0 podman[483028]: 2025-12-05 02:31:33.726744913 +0000 UTC m=+0.116760718 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, version=9.4, name=ubi9, release-0.7.12=, architecture=x86_64, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, build-date=2024-09-18T21:23:30, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  5 02:31:33 compute-0 podman[483029]: 2025-12-05 02:31:33.796790926 +0000 UTC m=+0.179136109 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:31:34 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:35 compute-0 nova_compute[349548]: 2025-12-05 02:31:35.720 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:36 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:37 compute-0 nova_compute[349548]: 2025-12-05 02:31:37.969 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:38 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:40 compute-0 nova_compute[349548]: 2025-12-05 02:31:40.725 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:40 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:42 compute-0 nova_compute[349548]: 2025-12-05 02:31:42.974 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:42 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:44 compute-0 podman[483082]: 2025-12-05 02:31:44.696993792 +0000 UTC m=+0.105827002 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:31:44 compute-0 podman[483095]: 2025-12-05 02:31:44.713219192 +0000 UTC m=+0.105738458 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 02:31:44 compute-0 podman[483083]: 2025-12-05 02:31:44.71587518 +0000 UTC m=+0.122104414 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:31:44 compute-0 podman[483084]: 2025-12-05 02:31:44.757128906 +0000 UTC m=+0.151799815 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec  5 02:31:44 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:31:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:31:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:31:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2851762960' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:31:45 compute-0 nova_compute[349548]: 2025-12-05 02:31:45.729 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:31:46 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:47 compute-0 nova_compute[349548]: 2025-12-05 02:31:47.979 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:48 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:50 compute-0 nova_compute[349548]: 2025-12-05 02:31:50.734 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:50 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:52 compute-0 nova_compute[349548]: 2025-12-05 02:31:52.980 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:52 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:54 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.087 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.088 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.185 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:31:55 compute-0 nova_compute[349548]: 2025-12-05 02:31:55.739 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.234 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.235 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:31:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:31:56.235 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:31:56 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:57 compute-0 nova_compute[349548]: 2025-12-05 02:31:57.983 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:31:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:31:58 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:31:59 compute-0 podman[483168]: 2025-12-05 02:31:59.725218773 +0000 UTC m=+0.129447056 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:31:59 compute-0 podman[483169]: 2025-12-05 02:31:59.733325458 +0000 UTC m=+0.132360311 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:31:59 compute-0 podman[158197]: time="2025-12-05T02:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:31:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8215 "" "Go-http-client/1.1"
Dec  5 02:32:00 compute-0 nova_compute[349548]: 2025-12-05 02:32:00.744 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:00 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:01 compute-0 nova_compute[349548]: 2025-12-05 02:32:01.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: ERROR   02:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:32:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.103 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.104 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.105 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:32:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:32:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1196867583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.597 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:32:02 compute-0 nova_compute[349548]: 2025-12-05 02:32:02.988 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.137 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.139 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3932MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.140 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.366 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.367 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.398 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:32:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:32:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3585697749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.975 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:32:03 compute-0 nova_compute[349548]: 2025-12-05 02:32:03.986 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.007 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.009 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:32:04 compute-0 nova_compute[349548]: 2025-12-05 02:32:04.010 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:32:04 compute-0 podman[483251]: 2025-12-05 02:32:04.730607129 +0000 UTC m=+0.122009941 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, release=1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  5 02:32:04 compute-0 podman[483250]: 2025-12-05 02:32:04.745379107 +0000 UTC m=+0.142507455 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, 
tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:32:04 compute-0 podman[483252]: 2025-12-05 02:32:04.748781496 +0000 UTC m=+0.135769350 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:32:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.011 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.012 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.012 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:05 compute-0 nova_compute[349548]: 2025-12-05 02:32:05.749 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:07 compute-0 nova_compute[349548]: 2025-12-05 02:32:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:07 compute-0 nova_compute[349548]: 2025-12-05 02:32:07.990 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:08 compute-0 nova_compute[349548]: 2025-12-05 02:32:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:10 compute-0 nova_compute[349548]: 2025-12-05 02:32:10.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:10 compute-0 nova_compute[349548]: 2025-12-05 02:32:10.754 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:12 compute-0 nova_compute[349548]: 2025-12-05 02:32:12.993 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:14 compute-0 podman[483306]: 2025-12-05 02:32:14.875193846 +0000 UTC m=+0.123603847 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3)
Dec  5 02:32:14 compute-0 podman[483308]: 2025-12-05 02:32:14.894856336 +0000 UTC m=+0.132649239 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec  5 02:32:14 compute-0 podman[483307]: 2025-12-05 02:32:14.900245342 +0000 UTC m=+0.131076173 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:32:14 compute-0 podman[483325]: 2025-12-05 02:32:14.971475459 +0000 UTC m=+0.162097664 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:32:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:15 compute-0 nova_compute[349548]: 2025-12-05 02:32:15.757 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:32:16
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data']
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:17 compute-0 nova_compute[349548]: 2025-12-05 02:32:17.997 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:32:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:32:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:32:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:20 compute-0 nova_compute[349548]: 2025-12-05 02:32:20.763 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:23 compute-0 nova_compute[349548]: 2025-12-05 02:32:23.000 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:23 compute-0 podman[483558]: 2025-12-05 02:32:23.17908428 +0000 UTC m=+0.113861944 container exec aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec  5 02:32:23 compute-0 podman[483558]: 2025-12-05 02:32:23.296388624 +0000 UTC m=+0.231166218 container exec_died aab8d24497e0526c860e11e450cd7f94cc43650c160d1be7b3681c185d3263e9 (image=quay.io/ceph/ceph:v18, name=ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:32:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:32:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:24 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:32:24 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2506: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:25 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:25 compute-0 nova_compute[349548]: 2025-12-05 02:32:25.768 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 2ca6ab86-75ae-4828-805b-ba47f64c4f95 does not exist
Dec  5 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d89b6710-ca8f-40bb-822b-ae14fbe743e3 does not exist
Dec  5 02:32:25 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6d1b2ac2-a9cd-46db-8b82-6b9666eb69a6 does not exist
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:32:25 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:32:25 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:26 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:32:26 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 51K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1342 writes, 6176 keys, 1342 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s#012Interval WAL: 1342 writes, 1342 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    103.2      0.62              0.29        36    0.017       0      0       0.0       0.0#012  L6      1/0    7.27 MB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   4.1    127.1    104.7      2.51              1.14        35    0.072    193K    19K       0.0       0.0#012 Sum      1/0    7.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    102.1    104.4      3.13              1.43        71    0.044    193K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.6    118.0    120.0      0.40              0.21        10    0.040     33K   2548       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.3      0.0       0.0   0.0    127.1    104.7      2.51              1.14        35    0.072    193K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    104.0      0.61              0.29        35    0.017       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.4      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.062, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.32 GB write, 0.07 MB/s write, 0.31 GB read, 0.07 MB/s read, 3.1 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x56463779d1f0#2 capacity: 304.00 MB usage: 40.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000324 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2775,38.85 MB,12.7811%) FilterBlock(72,542.05 KB,0.174126%) IndexBlock(72,885.92 KB,0.284591%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  5 02:32:26 compute-0 podman[483977]: 2025-12-05 02:32:26.97437691 +0000 UTC m=+0.089840587 container create 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:26.941331671 +0000 UTC m=+0.056795338 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:27 compute-0 systemd[1]: Started libpod-conmon-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope.
Dec  5 02:32:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.134437544 +0000 UTC m=+0.249901271 container init 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.147554924 +0000 UTC m=+0.263018591 container start 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.154371722 +0000 UTC m=+0.269835429 container attach 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Dec  5 02:32:27 compute-0 crazy_nightingale[483994]: 167 167
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.158489202 +0000 UTC m=+0.273952879 container died 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 02:32:27 compute-0 systemd[1]: libpod-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope: Deactivated successfully.
Dec  5 02:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d16128bf7bbc1297e67d3806d3df142484b10dbf773f007e251977f8396dc65-merged.mount: Deactivated successfully.
Dec  5 02:32:27 compute-0 podman[483977]: 2025-12-05 02:32:27.230185472 +0000 UTC m=+0.345649139 container remove 65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_nightingale, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Dec  5 02:32:27 compute-0 systemd[1]: libpod-conmon-65f2c0cb4be7998175a7d35d53ebf9a31ca696f3a1a8814fde1f9665456cdbee.scope: Deactivated successfully.
Dec  5 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.503999496 +0000 UTC m=+0.083616727 container create 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:32:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.471180424 +0000 UTC m=+0.050797705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:27 compute-0 systemd[1]: Started libpod-conmon-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope.
Dec  5 02:32:27 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.649524608 +0000 UTC m=+0.229141839 container init 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Dec  5 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.66443315 +0000 UTC m=+0.244050351 container start 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:32:27 compute-0 podman[484018]: 2025-12-05 02:32:27.669985671 +0000 UTC m=+0.249602902 container attach 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:32:28 compute-0 nova_compute[349548]: 2025-12-05 02:32:28.002 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:28 compute-0 nervous_lewin[484034]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:32:28 compute-0 nervous_lewin[484034]: --> relative data size: 1.0
Dec  5 02:32:28 compute-0 nervous_lewin[484034]: --> All data devices are unavailable
Dec  5 02:32:28 compute-0 systemd[1]: libpod-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Deactivated successfully.
Dec  5 02:32:28 compute-0 podman[484018]: 2025-12-05 02:32:28.981168781 +0000 UTC m=+1.560785982 container died 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 02:32:28 compute-0 systemd[1]: libpod-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Consumed 1.243s CPU time.
Dec  5 02:32:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2508: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-886f3be810430158d726cf90e2fa04d74ff7a0ed10e3ecc6ecf40c6e9398ddd1-merged.mount: Deactivated successfully.
Dec  5 02:32:29 compute-0 podman[484018]: 2025-12-05 02:32:29.077392493 +0000 UTC m=+1.657009694 container remove 7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:32:29 compute-0 systemd[1]: libpod-conmon-7ae83c15933be6120ac1927d9c40ef1d11898219740eb644b66dcad15a378f02.scope: Deactivated successfully.
Dec  5 02:32:29 compute-0 podman[158197]: time="2025-12-05T02:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:32:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8212 "" "Go-http-client/1.1"
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.2885087 +0000 UTC m=+0.090995221 container create 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.255504463 +0000 UTC m=+0.057991034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:30 compute-0 systemd[1]: Started libpod-conmon-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope.
Dec  5 02:32:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.431580281 +0000 UTC m=+0.234066822 container init 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.446035131 +0000 UTC m=+0.248521632 container start 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.451940612 +0000 UTC m=+0.254427183 container attach 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:32:30 compute-0 jovial_thompson[484234]: 167 167
Dec  5 02:32:30 compute-0 systemd[1]: libpod-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope: Deactivated successfully.
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.459967565 +0000 UTC m=+0.262454076 container died 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  5 02:32:30 compute-0 podman[484229]: 2025-12-05 02:32:30.484229899 +0000 UTC m=+0.121083344 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Dec  5 02:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-54da0405268325e6a4bf4b9149e04de7dc6b6fe9f17d38a6a671465817c502db-merged.mount: Deactivated successfully.
Dec  5 02:32:30 compute-0 podman[484216]: 2025-12-05 02:32:30.516568137 +0000 UTC m=+0.319054628 container remove 5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_thompson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Dec  5 02:32:30 compute-0 podman[484233]: 2025-12-05 02:32:30.517477353 +0000 UTC m=+0.148312864 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:32:30 compute-0 systemd[1]: libpod-conmon-5cacc8849655b4c75ced046cba084eb54535f582629b9c369837ddcc38fbaf30.scope: Deactivated successfully.
Dec  5 02:32:30 compute-0 nova_compute[349548]: 2025-12-05 02:32:30.772 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.801298538 +0000 UTC m=+0.098602772 container create b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.753553832 +0000 UTC m=+0.050858116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:30 compute-0 systemd[1]: Started libpod-conmon-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope.
Dec  5 02:32:30 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.946540131 +0000 UTC m=+0.243844365 container init b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.964193554 +0000 UTC m=+0.261497778 container start b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Dec  5 02:32:30 compute-0 podman[484293]: 2025-12-05 02:32:30.97095239 +0000 UTC m=+0.268256634 container attach b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:32:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2509: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: ERROR   02:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:32:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]: {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    "0": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "devices": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "/dev/loop3"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            ],
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_name": "ceph_lv0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_size": "21470642176",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "name": "ceph_lv0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "tags": {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_name": "ceph",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.crush_device_class": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.encrypted": "0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_id": "0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.vdo": "0"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            },
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "vg_name": "ceph_vg0"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        }
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    ],
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    "1": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "devices": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "/dev/loop4"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            ],
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_name": "ceph_lv1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_size": "21470642176",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "name": "ceph_lv1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "tags": {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_name": "ceph",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.crush_device_class": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.encrypted": "0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_id": "1",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.vdo": "0"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            },
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "vg_name": "ceph_vg1"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        }
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    ],
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    "2": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "devices": [
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "/dev/loop5"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            ],
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_name": "ceph_lv2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_size": "21470642176",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "name": "ceph_lv2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "tags": {
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.cluster_name": "ceph",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.crush_device_class": "",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.encrypted": "0",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osd_id": "2",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:                "ceph.vdo": "0"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            },
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "type": "block",
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:            "vg_name": "ceph_vg2"
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:        }
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]:    ]
Dec  5 02:32:31 compute-0 gallant_mclaren[484310]: }
Dec  5 02:32:31 compute-0 podman[484293]: 2025-12-05 02:32:31.860036493 +0000 UTC m=+1.157340707 container died b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:32:31 compute-0 systemd[1]: libpod-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope: Deactivated successfully.
Dec  5 02:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a96ea874ccdf4806039850a9f720e5e1b4bacc89f40c625536c79fd68d0231f5-merged.mount: Deactivated successfully.
Dec  5 02:32:31 compute-0 podman[484293]: 2025-12-05 02:32:31.972352312 +0000 UTC m=+1.269656516 container remove b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_mclaren, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Dec  5 02:32:31 compute-0 systemd[1]: libpod-conmon-b65e4cf75b83eb832d7e838205b90243499b4c52909cc959e35d724e2eacc8aa.scope: Deactivated successfully.
Dec  5 02:32:33 compute-0 nova_compute[349548]: 2025-12-05 02:32:33.005 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.159122443 +0000 UTC m=+0.089751265 container create d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:32:33 compute-0 systemd[1]: Started libpod-conmon-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope.
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.124128807 +0000 UTC m=+0.054757689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.273559083 +0000 UTC m=+0.204187935 container init d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.292494232 +0000 UTC m=+0.223123074 container start d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.298991371 +0000 UTC m=+0.229620263 container attach d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:32:33 compute-0 confident_tu[484489]: 167 167
Dec  5 02:32:33 compute-0 systemd[1]: libpod-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope: Deactivated successfully.
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.304271844 +0000 UTC m=+0.234900696 container died d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:32:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2c49fcb05fabfae135382fba92814f1937df16112e2cb8051fd64dcaa5eaa69-merged.mount: Deactivated successfully.
Dec  5 02:32:33 compute-0 podman[484473]: 2025-12-05 02:32:33.388372564 +0000 UTC m=+0.319001416 container remove d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_tu, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:32:33 compute-0 systemd[1]: libpod-conmon-d7931cee9f0f97b6f1da712de764b370ad968c27713f47d43f5c8e757ffaa979.scope: Deactivated successfully.
Dec  5 02:32:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.686086171 +0000 UTC m=+0.102574807 container create 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Dec  5 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.645649998 +0000 UTC m=+0.062138684 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:32:33 compute-0 systemd[1]: Started libpod-conmon-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope.
Dec  5 02:32:33 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.869878894 +0000 UTC m=+0.286367560 container init 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.895417174 +0000 UTC m=+0.311905810 container start 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:32:33 compute-0 podman[484511]: 2025-12-05 02:32:33.902211902 +0000 UTC m=+0.318700598 container attach 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:32:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]: {
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_id": 0,
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "type": "bluestore"
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    },
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_id": 1,
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "type": "bluestore"
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    },
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_id": 2,
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:        "type": "bluestore"
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]:    }
Dec  5 02:32:35 compute-0 quizzical_ardinghelli[484527]: }
Dec  5 02:32:35 compute-0 systemd[1]: libpod-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Deactivated successfully.
Dec  5 02:32:35 compute-0 systemd[1]: libpod-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Consumed 1.191s CPU time.
Dec  5 02:32:35 compute-0 podman[484561]: 2025-12-05 02:32:35.164681789 +0000 UTC m=+0.036959523 container died 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  5 02:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6817b5d0072dcb73462bf59526c3b29b775494e6c50a6f58a6ae8ab65e37bf80-merged.mount: Deactivated successfully.
Dec  5 02:32:35 compute-0 podman[484561]: 2025-12-05 02:32:35.245469473 +0000 UTC m=+0.117747127 container remove 7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ardinghelli, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Dec  5 02:32:35 compute-0 podman[484567]: 2025-12-05 02:32:35.251856598 +0000 UTC m=+0.104548984 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  5 02:32:35 compute-0 systemd[1]: libpod-conmon-7d2b02233e3f6cdc4e5d8535b564ee011e55a0dde1d99b790c30380b0e5e2420.scope: Deactivated successfully.
Dec  5 02:32:35 compute-0 podman[484562]: 2025-12-05 02:32:35.266085721 +0000 UTC m=+0.127014346 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, config_id=edpm, release-0.7.12=, distribution-scope=public, vcs-type=git, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  5 02:32:35 compute-0 podman[484560]: 2025-12-05 02:32:35.274979739 +0000 UTC m=+0.137427058 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:32:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:32:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:32:35 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb0d0580-4915-49b7-9c98-39f52d7a0f8b does not exist
Dec  5 02:32:35 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ab1c9d76-ce53-44e9-9d71-f4d5d6d65796 does not exist
Dec  5 02:32:35 compute-0 nova_compute[349548]: 2025-12-05 02:32:35.777 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:32:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:36 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:32:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:38 compute-0 nova_compute[349548]: 2025-12-05 02:32:38.009 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.330 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.331 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
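The two manager messages above report that the polling source has more pollsters queued than worker threads, so pollster executions serialize and the cycle runs longer than one task's duration. A minimal sketch of that saturation behavior with Python's `ThreadPoolExecutor` (this is an illustration, not ceilometer's actual code; the meter names are copied from the log, `poll` is a hypothetical stand-in for a pollster run):

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Meter names mirror the pollsters seen later in this log cycle.
pollsters = ["disk.root.size", "disk.device.capacity",
             "disk.ephemeral.size", "network.incoming.bytes.rate"]

def poll(name):
    # Hypothetical stand-in: real pollsters query libvirt/nova for samples.
    time.sleep(0.01)
    return f"{name}: polled"

# One worker thread, matching "with [1] threads" in the log: tasks queue up
# behind each other, so the cycle takes roughly len(pollsters) * per-task time.
with ThreadPoolExecutor(max_workers=1) as executor:
    results = list(executor.map(poll, pollsters))

print(results)
```

With `max_workers=1` the four submissions still all complete, just sequentially; raising the worker count (ceilometer exposes this as a polling option) is what removes the "process longer than expected" warning.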
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.331 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.336 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.340 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.347 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.348 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'power.state': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.355 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:32:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:32:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2513: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:40 compute-0 nova_compute[349548]: 2025-12-05 02:32:40.782 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:43 compute-0 nova_compute[349548]: 2025-12-05 02:32:43.012 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:32:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:32:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:32:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211330410' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:32:45 compute-0 podman[484686]: 2025-12-05 02:32:45.733341581 +0000 UTC m=+0.116824900 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec  5 02:32:45 compute-0 podman[484683]: 2025-12-05 02:32:45.741740805 +0000 UTC m=+0.141339972 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:32:45 compute-0 podman[484684]: 2025-12-05 02:32:45.744604398 +0000 UTC m=+0.139912521 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:32:45 compute-0 podman[484685]: 2025-12-05 02:32:45.764379981 +0000 UTC m=+0.149677133 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:32:45 compute-0 nova_compute[349548]: 2025-12-05 02:32:45.785 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:32:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:32:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:48 compute-0 nova_compute[349548]: 2025-12-05 02:32:48.018 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.664365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968664477, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1077, "num_deletes": 257, "total_data_size": 1592659, "memory_usage": 1622944, "flush_reason": "Manual Compaction"}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968684293, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1555889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51004, "largest_seqno": 52080, "table_properties": {"data_size": 1550614, "index_size": 2735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11064, "raw_average_key_size": 19, "raw_value_size": 1540061, "raw_average_value_size": 2687, "num_data_blocks": 123, "num_entries": 573, "num_filter_entries": 573, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901866, "oldest_key_time": 1764901866, "file_creation_time": 1764901968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 20043 microseconds, and 10974 cpu microseconds.
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.684405) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1555889 bytes OK
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.684440) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687100) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687124) EVENT_LOG_v1 {"time_micros": 1764901968687117, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.687149) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1587599, prev total WAL file size 1587599, number of live WAL files 2.
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.688544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323632' seq:0, type:0; will stop at (end)
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1519KB)], [122(7441KB)]
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968688616, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 9175753, "oldest_snapshot_seqno": -1}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 6560 keys, 9064759 bytes, temperature: kUnknown
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968760199, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9064759, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9022938, "index_size": 24301, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 172020, "raw_average_key_size": 26, "raw_value_size": 8906439, "raw_average_value_size": 1357, "num_data_blocks": 966, "num_entries": 6560, "num_filter_entries": 6560, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764901968, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.760524) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9064759 bytes
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.763388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.0 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(11.7) write-amplify(5.8) OK, records in: 7086, records dropped: 526 output_compression: NoCompression
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.763419) EVENT_LOG_v1 {"time_micros": 1764901968763405, "job": 74, "event": "compaction_finished", "compaction_time_micros": 71680, "compaction_time_cpu_micros": 44650, "output_level": 6, "num_output_files": 1, "total_output_size": 9064759, "num_input_records": 7086, "num_output_records": 6560, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968764152, "job": 74, "event": "table_file_deletion", "file_number": 124}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764901968767421, "job": 74, "event": "table_file_deletion", "file_number": 122}
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.688326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:48 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:32:48.767774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:32:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:50 compute-0 nova_compute[349548]: 2025-12-05 02:32:50.790 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:53 compute-0 nova_compute[349548]: 2025-12-05 02:32:53.022 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:55 compute-0 nova_compute[349548]: 2025-12-05 02:32:55.796 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.236 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.236 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:32:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:32:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:32:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:32:57 compute-0 nova_compute[349548]: 2025-12-05 02:32:57.145 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:32:58 compute-0 nova_compute[349548]: 2025-12-05 02:32:58.023 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:32:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:32:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:32:59 compute-0 podman[158197]: time="2025-12-05T02:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:32:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8204 "" "Go-http-client/1.1"
Dec  5 02:33:00 compute-0 podman[484772]: 2025-12-05 02:33:00.711366012 +0000 UTC m=+0.116207882 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec  5 02:33:00 compute-0 podman[484773]: 2025-12-05 02:33:00.738741586 +0000 UTC m=+0.135044869 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:33:00 compute-0 nova_compute[349548]: 2025-12-05 02:33:00.799 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: ERROR   02:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:33:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:33:01 compute-0 nova_compute[349548]: 2025-12-05 02:33:01.483 349552 DEBUG oslo_concurrency.processutils [None req-f776971b-def7-42eb-8170-c70550c5a615 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:33:01 compute-0 nova_compute[349548]: 2025-12-05 02:33:01.524 349552 DEBUG oslo_concurrency.processutils [None req-f776971b-def7-42eb-8170-c70550c5a615 ff880837791d4f49a54672b8d0e705ff 6ad982b73954486390215862ee62239f - - default default] CMD "env LANG=C uptime" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.099 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.100 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.101 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.101 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.102 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:33:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:33:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866606673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:33:02 compute-0 nova_compute[349548]: 2025-12-05 02:33:02.578 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.026 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.083 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.084 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3929MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.085 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.085 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.157 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.158 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.187 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:33:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:33:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139789969' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.693 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.705 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.727 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.730 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:33:03 compute-0 nova_compute[349548]: 2025-12-05 02:33:03.730 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.732 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.733 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:04 compute-0 nova_compute[349548]: 2025-12-05 02:33:04.733 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:33:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:05 compute-0 podman[484861]: 2025-12-05 02:33:05.724807424 +0000 UTC m=+0.118975043 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:33:05 compute-0 podman[484860]: 2025-12-05 02:33:05.727380029 +0000 UTC m=+0.127612444 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:33:05 compute-0 podman[484859]: 2025-12-05 02:33:05.753794295 +0000 UTC m=+0.161470216 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, 
container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Dec  5 02:33:05 compute-0 nova_compute[349548]: 2025-12-05 02:33:05.804 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:06 compute-0 nova_compute[349548]: 2025-12-05 02:33:06.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2527: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.029 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:08 compute-0 nova_compute[349548]: 2025-12-05 02:33:08.879 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:08.880 287122 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:c8:c0', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '2a:b5:45:4f:f9:d2'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  5 02:33:08 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:08.882 287122 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  5 02:33:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:09 compute-0 nova_compute[349548]: 2025-12-05 02:33:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:10 compute-0 nova_compute[349548]: 2025-12-05 02:33:10.807 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2529: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:13 compute-0 nova_compute[349548]: 2025-12-05 02:33:13.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2530: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:13 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:13.884 287122 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8dd76c1c-ab01-42af-b35e-2e870841b6ad, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  5 02:33:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:15 compute-0 nova_compute[349548]: 2025-12-05 02:33:15.813 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:33:16
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', '.mgr', 'volumes', 'vms', 'images']
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:16 compute-0 podman[484917]: 2025-12-05 02:33:16.719466852 +0000 UTC m=+0.115794611 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:33:16 compute-0 podman[484919]: 2025-12-05 02:33:16.736041893 +0000 UTC m=+0.117863581 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, release=1755695350, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter)
Dec  5 02:33:16 compute-0 podman[484916]: 2025-12-05 02:33:16.738749821 +0000 UTC m=+0.139211030 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd)
Dec  5 02:33:16 compute-0 podman[484918]: 2025-12-05 02:33:16.781581894 +0000 UTC m=+0.171419905 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:33:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:18 compute-0 nova_compute[349548]: 2025-12-05 02:33:18.036 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:33:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:33:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:20 compute-0 nova_compute[349548]: 2025-12-05 02:33:20.818 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:23 compute-0 nova_compute[349548]: 2025-12-05 02:33:23.040 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2535: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:25 compute-0 nova_compute[349548]: 2025-12-05 02:33:25.823 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:33:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:33:28 compute-0 nova_compute[349548]: 2025-12-05 02:33:28.043 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2538: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:29 compute-0 podman[158197]: time="2025-12-05T02:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:33:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8199 "" "Go-http-client/1.1"
Dec  5 02:33:30 compute-0 nova_compute[349548]: 2025-12-05 02:33:30.828 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: ERROR   02:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:33:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:33:31 compute-0 podman[485005]: 2025-12-05 02:33:31.725211321 +0000 UTC m=+0.130385624 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 02:33:31 compute-0 podman[485006]: 2025-12-05 02:33:31.729702011 +0000 UTC m=+0.127028906 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:33:33 compute-0 nova_compute[349548]: 2025-12-05 02:33:33.047 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:35 compute-0 nova_compute[349548]: 2025-12-05 02:33:35.834 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:36 compute-0 podman[485095]: 2025-12-05 02:33:36.031232039 +0000 UTC m=+0.108949272 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Dec  5 02:33:36 compute-0 podman[485093]: 2025-12-05 02:33:36.045657588 +0000 UTC m=+0.133128054 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, 
org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute)
Dec  5 02:33:36 compute-0 podman[485094]: 2025-12-05 02:33:36.045769781 +0000 UTC m=+0.123149134 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, managed_by=edpm_ansible, name=ubi9)
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev d59dcbc8-1e0a-4f2c-945e-6f5046236e90 does not exist
Dec  5 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 380cf9b3-c9a9-4977-837e-ac52da1f78a2 does not exist
Dec  5 02:33:36 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c8b1d7a4-987f-488c-99e9-1cb434461c36 does not exist
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:33:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:33:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:33:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:37 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:33:38 compute-0 nova_compute[349548]: 2025-12-05 02:33:38.050 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.052747197 +0000 UTC m=+0.096014846 container create e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.01525744 +0000 UTC m=+0.058525129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:38 compute-0 systemd[1]: Started libpod-conmon-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope.
Dec  5 02:33:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.223057888 +0000 UTC m=+0.266325577 container init e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.239873876 +0000 UTC m=+0.283141525 container start e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.246148208 +0000 UTC m=+0.289415857 container attach e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:33:38 compute-0 youthful_jennings[485378]: 167 167
Dec  5 02:33:38 compute-0 systemd[1]: libpod-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope: Deactivated successfully.
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.255145509 +0000 UTC m=+0.298413148 container died e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 02:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f719ce65fd267afd0cee13e5bfb285715db20374966b33af131c9109bc7ab8ce-merged.mount: Deactivated successfully.
Dec  5 02:33:38 compute-0 podman[485362]: 2025-12-05 02:33:38.350260149 +0000 UTC m=+0.393527758 container remove e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:33:38 compute-0 systemd[1]: libpod-conmon-e5ae5ffbd107b091e7b61477414f03f1b34cee39a5070eea4ccb2959c153dc87.scope: Deactivated successfully.
Dec  5 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.605727231 +0000 UTC m=+0.096740208 container create ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.572202328 +0000 UTC m=+0.063215355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:38 compute-0 systemd[1]: Started libpod-conmon-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope.
Dec  5 02:33:38 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.800189022 +0000 UTC m=+0.291201999 container init ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.825574389 +0000 UTC m=+0.316587356 container start ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:33:38 compute-0 podman[485401]: 2025-12-05 02:33:38.832000115 +0000 UTC m=+0.323013082 container attach ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:33:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:39 compute-0 peaceful_maxwell[485417]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:33:39 compute-0 peaceful_maxwell[485417]: --> relative data size: 1.0
Dec  5 02:33:39 compute-0 peaceful_maxwell[485417]: --> All data devices are unavailable
Dec  5 02:33:40 compute-0 systemd[1]: libpod-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Deactivated successfully.
Dec  5 02:33:40 compute-0 systemd[1]: libpod-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Consumed 1.122s CPU time.
Dec  5 02:33:40 compute-0 podman[485401]: 2025-12-05 02:33:40.003696478 +0000 UTC m=+1.494709645 container died ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bb64c044fae5e769dd2f56bf1035ed55ad0c6ec840d36c6470b0e0730ce81c5-merged.mount: Deactivated successfully.
Dec  5 02:33:40 compute-0 podman[485401]: 2025-12-05 02:33:40.103316148 +0000 UTC m=+1.594329115 container remove ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:33:40 compute-0 systemd[1]: libpod-conmon-ce28cd8078b0f9f24de5b8df6910688a050c1761861db9360e6b0c206b7dc802.scope: Deactivated successfully.
Dec  5 02:33:40 compute-0 nova_compute[349548]: 2025-12-05 02:33:40.840 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.346268309 +0000 UTC m=+0.116817790 container create bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.292493809 +0000 UTC m=+0.063043340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:41 compute-0 systemd[1]: Started libpod-conmon-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope.
Dec  5 02:33:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.497937949 +0000 UTC m=+0.268487490 container init bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.518385003 +0000 UTC m=+0.288934494 container start bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.525264892 +0000 UTC m=+0.295814373 container attach bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Dec  5 02:33:41 compute-0 recursing_chaplygin[485611]: 167 167
Dec  5 02:33:41 compute-0 systemd[1]: libpod-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope: Deactivated successfully.
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.532000068 +0000 UTC m=+0.302549569 container died bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf66bf10d92bb219200e607eb03a3d6c1a789755c3d97adc7428487b864f3db-merged.mount: Deactivated successfully.
Dec  5 02:33:41 compute-0 podman[485597]: 2025-12-05 02:33:41.611607817 +0000 UTC m=+0.382157298 container remove bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:33:41 compute-0 systemd[1]: libpod-conmon-bf6180495013447a7306d928cb82881110b5e39d3936c26ef2d2307857b1de88.scope: Deactivated successfully.
Dec  5 02:33:41 compute-0 podman[485637]: 2025-12-05 02:33:41.859320214 +0000 UTC m=+0.072550966 container create 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Dec  5 02:33:41 compute-0 podman[485637]: 2025-12-05 02:33:41.834313398 +0000 UTC m=+0.047544161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:41 compute-0 systemd[1]: Started libpod-conmon-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope.
Dec  5 02:33:41 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.013548769 +0000 UTC m=+0.226779551 container init 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Dec  5 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.030558972 +0000 UTC m=+0.243789714 container start 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.03738805 +0000 UTC m=+0.250618822 container attach 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 02:33:42 compute-0 practical_lamport[485653]: {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    "0": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "devices": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "/dev/loop3"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            ],
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_name": "ceph_lv0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_size": "21470642176",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "name": "ceph_lv0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "tags": {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_name": "ceph",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.crush_device_class": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.encrypted": "0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_id": "0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.vdo": "0"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            },
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "vg_name": "ceph_vg0"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        }
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    ],
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    "1": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "devices": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "/dev/loop4"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            ],
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_name": "ceph_lv1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_size": "21470642176",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "name": "ceph_lv1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "tags": {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_name": "ceph",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.crush_device_class": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.encrypted": "0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_id": "1",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.vdo": "0"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            },
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "vg_name": "ceph_vg1"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        }
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    ],
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    "2": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "devices": [
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "/dev/loop5"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            ],
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_name": "ceph_lv2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_size": "21470642176",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "name": "ceph_lv2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "tags": {
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.cluster_name": "ceph",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.crush_device_class": "",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.encrypted": "0",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osd_id": "2",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:                "ceph.vdo": "0"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            },
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "type": "block",
Dec  5 02:33:42 compute-0 practical_lamport[485653]:            "vg_name": "ceph_vg2"
Dec  5 02:33:42 compute-0 practical_lamport[485653]:        }
Dec  5 02:33:42 compute-0 practical_lamport[485653]:    ]
Dec  5 02:33:42 compute-0 practical_lamport[485653]: }
Dec  5 02:33:42 compute-0 systemd[1]: libpod-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope: Deactivated successfully.
Dec  5 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.852613462 +0000 UTC m=+1.065844214 container died 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  5 02:33:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b3db947a40813ee0c30fffd69f90df7217dcd24f512826eda9fccae68fffa4-merged.mount: Deactivated successfully.
Dec  5 02:33:42 compute-0 podman[485637]: 2025-12-05 02:33:42.960305136 +0000 UTC m=+1.173535858 container remove 8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:33:42 compute-0 systemd[1]: libpod-conmon-8bf87643ba2086b1e6cc4b01545d7d07339f9ea95c0ed4abe5731641906cffa3.scope: Deactivated successfully.
Dec  5 02:33:43 compute-0 nova_compute[349548]: 2025-12-05 02:33:43.053 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:33:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.072171463 +0000 UTC m=+0.071455014 container create aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.035614663 +0000 UTC m=+0.034898274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:44 compute-0 systemd[1]: Started libpod-conmon-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope.
Dec  5 02:33:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.206231583 +0000 UTC m=+0.205515174 container init aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.219963971 +0000 UTC m=+0.219247532 container start aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.226106109 +0000 UTC m=+0.225389720 container attach aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Dec  5 02:33:44 compute-0 reverent_khorana[485824]: 167 167
Dec  5 02:33:44 compute-0 systemd[1]: libpod-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope: Deactivated successfully.
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.231510066 +0000 UTC m=+0.230793617 container died aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-f265eef36bac6a84777f97d4ecea8c308f4828d7b9880be86566e165edef4f5d-merged.mount: Deactivated successfully.
Dec  5 02:33:44 compute-0 podman[485810]: 2025-12-05 02:33:44.300650232 +0000 UTC m=+0.299933793 container remove aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_khorana, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:33:44 compute-0 systemd[1]: libpod-conmon-aabf914e7a19c52ba66d1508368de680db88a38f8f7d26fa0012e4d852ccec50.scope: Deactivated successfully.
Dec  5 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.592839769 +0000 UTC m=+0.093738100 container create 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Dec  5 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.557663389 +0000 UTC m=+0.058561780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:33:44 compute-0 systemd[1]: Started libpod-conmon-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope.
Dec  5 02:33:44 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.799448703 +0000 UTC m=+0.300347044 container init 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.822932375 +0000 UTC m=+0.323830716 container start 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Dec  5 02:33:44 compute-0 podman[485846]: 2025-12-05 02:33:44.829694961 +0000 UTC m=+0.330593302 container attach 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:33:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:33:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:33:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:33:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1355625633' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:33:45 compute-0 nova_compute[349548]: 2025-12-05 02:33:45.846 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:33:45 compute-0 distracted_hoover[485862]: {
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_id": 0,
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "type": "bluestore"
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    },
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_id": 1,
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "type": "bluestore"
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    },
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_id": 2,
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:        "type": "bluestore"
Dec  5 02:33:45 compute-0 distracted_hoover[485862]:    }
Dec  5 02:33:45 compute-0 distracted_hoover[485862]: }
Dec  5 02:33:45 compute-0 systemd[1]: libpod-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Deactivated successfully.
Dec  5 02:33:45 compute-0 systemd[1]: libpod-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Consumed 1.139s CPU time.
Dec  5 02:33:46 compute-0 podman[485896]: 2025-12-05 02:33:46.041290452 +0000 UTC m=+0.059467707 container died 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e58c6e5ab86a86ccb18fce0ead6a833969339eb18c55d61441adb0a83bff34ef-merged.mount: Deactivated successfully.
Dec  5 02:33:46 compute-0 podman[485896]: 2025-12-05 02:33:46.135005261 +0000 UTC m=+0.153182476 container remove 95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_hoover, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  5 02:33:46 compute-0 systemd[1]: libpod-conmon-95dc9476ec9357b371bfa26c991f42bc59b3cef1de5649044757e472bcff7c6e.scope: Deactivated successfully.
Dec  5 02:33:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:33:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:46 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:33:46 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 68dbbce2-2ee6-496b-9963-df32038e75f3 does not exist
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 884a5939-91a3-41cb-84af-49f649b26c39 does not exist
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:33:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:33:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:47 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:33:47 compute-0 podman[485959]: 2025-12-05 02:33:47.72221194 +0000 UTC m=+0.125096680 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:33:47 compute-0 podman[485960]: 2025-12-05 02:33:47.746547866 +0000 UTC m=+0.145253625 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:33:47 compute-0 podman[485962]: 2025-12-05 02:33:47.751033356 +0000 UTC m=+0.139512388 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  5 02:33:47 compute-0 podman[485961]: 2025-12-05 02:33:47.774455336 +0000 UTC m=+0.169663284 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:33:48 compute-0 nova_compute[349548]: 2025-12-05 02:33:48.057 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:50 compute-0 nova_compute[349548]: 2025-12-05 02:33:50.853 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:53 compute-0 nova_compute[349548]: 2025-12-05 02:33:53.062 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:55 compute-0 nova_compute[349548]: 2025-12-05 02:33:55.859 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:33:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:33:56.237 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:33:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:58 compute-0 nova_compute[349548]: 2025-12-05 02:33:58.066 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:33:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:33:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2553: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:33:59 compute-0 nova_compute[349548]: 2025-12-05 02:33:59.083 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:33:59 compute-0 podman[158197]: time="2025-12-05T02:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:33:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8216 "" "Go-http-client/1.1"
Dec  5 02:34:00 compute-0 nova_compute[349548]: 2025-12-05 02:34:00.864 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: ERROR   02:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:34:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.098 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.099 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:34:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:34:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223263005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:34:02 compute-0 nova_compute[349548]: 2025-12-05 02:34:02.676 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:34:02 compute-0 podman[486065]: 2025-12-05 02:34:02.699123876 +0000 UTC m=+0.105107551 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:34:02 compute-0 podman[486064]: 2025-12-05 02:34:02.715163971 +0000 UTC m=+0.136306725 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.069 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.141 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.142 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3906MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.142 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.143 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:34:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.969 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:34:03 compute-0 nova_compute[349548]: 2025-12-05 02:34:03.970 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.451 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing inventories for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.866 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating ProviderTree inventory for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.867 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Updating inventory in ProviderTree for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.889 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing aggregate associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.938 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Refreshing trait associations for resource provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17, traits: HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,HW_CPU_X86_AESNI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE42,HW_CPU_X86_AVX2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_IDE,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE41,HW_CPU_X86_SHA,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_CLMUL,HW_CPU_X86_BMI2,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_TRUSTED_CERTS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  5 02:34:04 compute-0 nova_compute[349548]: 2025-12-05 02:34:04.962 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:34:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:34:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1365559343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.417 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.430 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.474 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.477 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.478 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:34:05 compute-0 nova_compute[349548]: 2025-12-05 02:34:05.871 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.480 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.481 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.481 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:06 compute-0 nova_compute[349548]: 2025-12-05 02:34:06.482 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:34:06 compute-0 podman[486130]: 2025-12-05 02:34:06.720775874 +0000 UTC m=+0.123375491 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  5 02:34:06 compute-0 podman[486129]: 2025-12-05 02:34:06.764927325 +0000 UTC m=+0.167322336 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Dec  5 02:34:06 compute-0 podman[486131]: 2025-12-05 02:34:06.784664997 +0000 UTC m=+0.175445691 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  5 02:34:07 compute-0 nova_compute[349548]: 2025-12-05 02:34:07.068 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:08 compute-0 nova_compute[349548]: 2025-12-05 02:34:08.073 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2558: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:10 compute-0 nova_compute[349548]: 2025-12-05 02:34:10.877 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:11 compute-0 nova_compute[349548]: 2025-12-05 02:34:11.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:11 compute-0 nova_compute[349548]: 2025-12-05 02:34:11.085 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:34:13 compute-0 nova_compute[349548]: 2025-12-05 02:34:13.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2561: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:15 compute-0 nova_compute[349548]: 2025-12-05 02:34:15.882 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:34:16
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', '.mgr']
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:18 compute-0 nova_compute[349548]: 2025-12-05 02:34:18.079 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:34:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:34:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:18 compute-0 podman[486186]: 2025-12-05 02:34:18.731079388 +0000 UTC m=+0.134514604 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:34:18 compute-0 podman[486192]: 2025-12-05 02:34:18.737299548 +0000 UTC m=+0.113220206 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1755695350, version=9.6, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  5 02:34:18 compute-0 podman[486187]: 2025-12-05 02:34:18.756738442 +0000 UTC m=+0.147022946 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:34:18 compute-0 podman[486188]: 2025-12-05 02:34:18.776402553 +0000 UTC m=+0.161848447 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  5 02:34:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:20 compute-0 nova_compute[349548]: 2025-12-05 02:34:20.887 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2564: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:34:22 compute-0 ceph-osd[206647]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 38K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2909 syncs, 3.56 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 340 writes, 834 keys, 340 commit groups, 1.0 writes per commit group, ingest: 0.30 MB, 0.00 MB/s#012Interval WAL: 340 writes, 160 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:34:23 compute-0 nova_compute[349548]: 2025-12-05 02:34:23.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:25 compute-0 nova_compute[349548]: 2025-12-05 02:34:25.891 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:34:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:34:28 compute-0 nova_compute[349548]: 2025-12-05 02:34:28.084 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:34:29 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.2 total, 600.0 interval#012Cumulative writes: 12K writes, 46K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3420 syncs, 3.58 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 497 writes, 1263 keys, 497 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s#012Interval WAL: 497 writes, 236 syncs, 2.11 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:34:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:29 compute-0 podman[158197]: time="2025-12-05T02:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:34:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8220 "" "Go-http-client/1.1"
Dec  5 02:34:30 compute-0 nova_compute[349548]: 2025-12-05 02:34:30.897 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: ERROR   02:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:34:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:34:33 compute-0 nova_compute[349548]: 2025-12-05 02:34:33.090 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:33 compute-0 podman[486270]: 2025-12-05 02:34:33.717501555 +0000 UTC m=+0.127072418 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, 
tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:34:33 compute-0 podman[486271]: 2025-12-05 02:34:33.725050834 +0000 UTC m=+0.127787919 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  5 02:34:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:34:35 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2729 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 481 writes, 1444 keys, 481 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 481 writes, 225 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:34:35 compute-0 nova_compute[349548]: 2025-12-05 02:34:35.900 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:37 compute-0 ceph-mgr[193209]: [devicehealth INFO root] Check health
Dec  5 02:34:37 compute-0 podman[486312]: 2025-12-05 02:34:37.723082415 +0000 UTC m=+0.120921729 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec  5 02:34:37 compute-0 podman[486313]: 2025-12-05 02:34:37.734946509 +0000 UTC m=+0.126132850 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi)
Dec  5 02:34:37 compute-0 podman[486311]: 2025-12-05 02:34:37.761724126 +0000 UTC m=+0.174631977 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec  5 02:34:38 compute-0 nova_compute[349548]: 2025-12-05 02:34:38.094 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.331 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.332 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.335 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.349 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.350 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:34:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:34:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:40 compute-0 nova_compute[349548]: 2025-12-05 02:34:40.906 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:43 compute-0 nova_compute[349548]: 2025-12-05 02:34:43.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:34:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:34:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:34:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3421371381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:34:45 compute-0 nova_compute[349548]: 2025-12-05 02:34:45.910 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:34:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:34:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e6cb9b20-6fda-4924-8121-b1988e6a06b7 does not exist
Dec  5 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fd67c415-b8ad-4554-9633-66385b65794d does not exist
Dec  5 02:34:47 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 479c2e56-3a1f-48d2-a6a2-d6ede7a7ae06 does not exist
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:34:47 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:34:47 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:34:48 compute-0 nova_compute[349548]: 2025-12-05 02:34:48.104 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:48 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:34:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:48 compute-0 podman[486635]: 2025-12-05 02:34:48.945288073 +0000 UTC m=+0.088766956 container create 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:34:48 compute-0 podman[486635]: 2025-12-05 02:34:48.905817438 +0000 UTC m=+0.049296361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:49 compute-0 systemd[1]: Started libpod-conmon-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope.
Dec  5 02:34:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.103959847 +0000 UTC m=+0.247438740 container init 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 02:34:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.11684108 +0000 UTC m=+0.260319933 container start 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.121768513 +0000 UTC m=+0.265247366 container attach 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:34:49 compute-0 relaxed_lamport[486671]: 167 167
Dec  5 02:34:49 compute-0 systemd[1]: libpod-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope: Deactivated successfully.
Dec  5 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.128573881 +0000 UTC m=+0.272052734 container died 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 02:34:49 compute-0 podman[486654]: 2025-12-05 02:34:49.145307466 +0000 UTC m=+0.104980966 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:34:49 compute-0 podman[486652]: 2025-12-05 02:34:49.145545253 +0000 UTC m=+0.121052783 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:34:49 compute-0 podman[486649]: 2025-12-05 02:34:49.151223788 +0000 UTC m=+0.135395079 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  5 02:34:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbd2ae7c40916ed493cc381cfc4b00c10a1b3c91d38d32bd6f3fea70941d22d0-merged.mount: Deactivated successfully.
Dec  5 02:34:49 compute-0 podman[486653]: 2025-12-05 02:34:49.177516591 +0000 UTC m=+0.149319713 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  5 02:34:49 compute-0 podman[486635]: 2025-12-05 02:34:49.183083022 +0000 UTC m=+0.326561865 container remove 80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_lamport, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:34:49 compute-0 systemd[1]: libpod-conmon-80de0e85572e99a01657b4abb2d6767b7672de331dd0a1f8d9027e749115c8a4.scope: Deactivated successfully.
Dec  5 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.426509634 +0000 UTC m=+0.095462670 container create 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.384733213 +0000 UTC m=+0.053686299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:49 compute-0 systemd[1]: Started libpod-conmon-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope.
Dec  5 02:34:49 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.607170736 +0000 UTC m=+0.276123822 container init 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.623203631 +0000 UTC m=+0.292156677 container start 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Dec  5 02:34:49 compute-0 podman[486761]: 2025-12-05 02:34:49.630001848 +0000 UTC m=+0.298954964 container attach 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:50 compute-0 zen_yonath[486778]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:34:50 compute-0 zen_yonath[486778]: --> relative data size: 1.0
Dec  5 02:34:50 compute-0 zen_yonath[486778]: --> All data devices are unavailable
Dec  5 02:34:50 compute-0 nova_compute[349548]: 2025-12-05 02:34:50.914 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:50 compute-0 systemd[1]: libpod-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Deactivated successfully.
Dec  5 02:34:50 compute-0 systemd[1]: libpod-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Consumed 1.235s CPU time.
Dec  5 02:34:50 compute-0 podman[486761]: 2025-12-05 02:34:50.921639622 +0000 UTC m=+1.590592638 container died 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  5 02:34:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-96e59e62c8430b02cdfbd915f40581ad47e057e2f88be408953d16e08f6f73b8-merged.mount: Deactivated successfully.
Dec  5 02:34:51 compute-0 podman[486761]: 2025-12-05 02:34:51.015580167 +0000 UTC m=+1.684533183 container remove 12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:34:51 compute-0 systemd[1]: libpod-conmon-12b82819801b3505b93a2cd0703263c085f246224b93323ec0368333f292c0ed.scope: Deactivated successfully.
Dec  5 02:34:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.138214327 +0000 UTC m=+0.073195585 container create 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  5 02:34:52 compute-0 systemd[1]: Started libpod-conmon-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope.
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.108185495 +0000 UTC m=+0.043166803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.258186927 +0000 UTC m=+0.193168235 container init 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.277171018 +0000 UTC m=+0.212152276 container start 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.284083168 +0000 UTC m=+0.219064476 container attach 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:34:52 compute-0 happy_lamport[486971]: 167 167
Dec  5 02:34:52 compute-0 systemd[1]: libpod-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope: Deactivated successfully.
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.287009303 +0000 UTC m=+0.221990521 container died 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Dec  5 02:34:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee03e4bedd022cf511fe9e6361b282b6fcca9b34cd4b20583148ea06aebdc03-merged.mount: Deactivated successfully.
Dec  5 02:34:52 compute-0 podman[486955]: 2025-12-05 02:34:52.360814305 +0000 UTC m=+0.295795563 container remove 94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:34:52 compute-0 systemd[1]: libpod-conmon-94ed85aa2b942c4c6d9af70f8be9d19dabb4041ca3cdc1cd5a716b85046cf1a9.scope: Deactivated successfully.
Dec  5 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.659567522 +0000 UTC m=+0.085187832 container create e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.636716989 +0000 UTC m=+0.062337329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:52 compute-0 systemd[1]: Started libpod-conmon-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope.
Dec  5 02:34:52 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.815563828 +0000 UTC m=+0.241184198 container init e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  5 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.830907933 +0000 UTC m=+0.256528233 container start e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Dec  5 02:34:52 compute-0 podman[486994]: 2025-12-05 02:34:52.836416003 +0000 UTC m=+0.262036363 container attach e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:34:53 compute-0 nova_compute[349548]: 2025-12-05 02:34:53.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]: {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    "0": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "devices": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "/dev/loop3"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            ],
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_name": "ceph_lv0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_size": "21470642176",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "name": "ceph_lv0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "tags": {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_name": "ceph",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.crush_device_class": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.encrypted": "0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_id": "0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.vdo": "0"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            },
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "vg_name": "ceph_vg0"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        }
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    ],
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    "1": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "devices": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "/dev/loop4"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            ],
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_name": "ceph_lv1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_size": "21470642176",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "name": "ceph_lv1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "tags": {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_name": "ceph",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.crush_device_class": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.encrypted": "0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_id": "1",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.vdo": "0"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            },
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "vg_name": "ceph_vg1"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        }
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    ],
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    "2": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "devices": [
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "/dev/loop5"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            ],
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_name": "ceph_lv2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_size": "21470642176",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "name": "ceph_lv2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "tags": {
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.cluster_name": "ceph",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.crush_device_class": "",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.encrypted": "0",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osd_id": "2",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:                "ceph.vdo": "0"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            },
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "type": "block",
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:            "vg_name": "ceph_vg2"
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:        }
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]:    ]
Dec  5 02:34:53 compute-0 jolly_antonelli[487009]: }
Dec  5 02:34:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:53 compute-0 systemd[1]: libpod-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope: Deactivated successfully.
Dec  5 02:34:53 compute-0 podman[486994]: 2025-12-05 02:34:53.713836269 +0000 UTC m=+1.139456599 container died e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:34:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7774ec02517239eb4e51fc1573b16a478c5aec753581142ff9530ed38dd2be-merged.mount: Deactivated successfully.
Dec  5 02:34:53 compute-0 podman[486994]: 2025-12-05 02:34:53.831635247 +0000 UTC m=+1.257255597 container remove e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:53 compute-0 systemd[1]: libpod-conmon-e814d263d3c7407d68fec74de89f5c0c81dcc69c6625e6d234e6a5a1ca2b7a41.scope: Deactivated successfully.
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.020620913 +0000 UTC m=+0.076569162 container create 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:54.984521885 +0000 UTC m=+0.040470134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:55 compute-0 systemd[1]: Started libpod-conmon-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope.
Dec  5 02:34:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.162349504 +0000 UTC m=+0.218297813 container init 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.179023708 +0000 UTC m=+0.234971967 container start 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:34:55 compute-0 intelligent_satoshi[487184]: 167 167
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.185686442 +0000 UTC m=+0.241634731 container attach 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Dec  5 02:34:55 compute-0 systemd[1]: libpod-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope: Deactivated successfully.
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.191391027 +0000 UTC m=+0.247339276 container died 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Dec  5 02:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9348fcf3ab7ddf1a24c9ad110a6e7d5236ecfdcd7cd2df40816bbc80aa5ddfda-merged.mount: Deactivated successfully.
Dec  5 02:34:55 compute-0 podman[487169]: 2025-12-05 02:34:55.268224936 +0000 UTC m=+0.324173185 container remove 72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Dec  5 02:34:55 compute-0 systemd[1]: libpod-conmon-72bed6ca347f177490130b11114fc5b986d7ef694d9528c912593e80cbb8967e.scope: Deactivated successfully.
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.455914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095455987, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 1227, "num_deletes": 251, "total_data_size": 1859781, "memory_usage": 1892824, "flush_reason": "Manual Compaction"}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095472637, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 1841996, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52081, "largest_seqno": 53307, "table_properties": {"data_size": 1836072, "index_size": 3255, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12310, "raw_average_key_size": 19, "raw_value_size": 1824286, "raw_average_value_size": 2932, "num_data_blocks": 146, "num_entries": 622, "num_filter_entries": 622, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764901969, "oldest_key_time": 1764901969, "file_creation_time": 1764902095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 16829 microseconds, and 7879 cpu microseconds.
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.472740) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 1841996 bytes OK
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.472772) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475424) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475448) EVENT_LOG_v1 {"time_micros": 1764902095475440, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.475478) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1854229, prev total WAL file size 1854229, number of live WAL files 2.
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.477109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(1798KB)], [125(8852KB)]
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095477227, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10906755, "oldest_snapshot_seqno": -1}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 6668 keys, 9144766 bytes, temperature: kUnknown
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095548875, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 9144766, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9102181, "index_size": 24808, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 174903, "raw_average_key_size": 26, "raw_value_size": 8983548, "raw_average_value_size": 1347, "num_data_blocks": 982, "num_entries": 6668, "num_filter_entries": 6668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764902095, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.550585648 +0000 UTC m=+0.098989622 container create 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.549336) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 9144766 bytes
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.552688) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.9 rd, 127.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 8.6 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(10.9) write-amplify(5.0) OK, records in: 7182, records dropped: 514 output_compression: NoCompression
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.552722) EVENT_LOG_v1 {"time_micros": 1764902095552706, "job": 76, "event": "compaction_finished", "compaction_time_micros": 71811, "compaction_time_cpu_micros": 42013, "output_level": 6, "num_output_files": 1, "total_output_size": 9144766, "num_input_records": 7182, "num_output_records": 6668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095553971, "job": 76, "event": "table_file_deletion", "file_number": 127}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902095557530, "job": 76, "event": "table_file_deletion", "file_number": 125}
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.476805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557788) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:34:55.557799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.495175721 +0000 UTC m=+0.043579715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:34:55 compute-0 systemd[1]: Started libpod-conmon-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope.
Dec  5 02:34:55 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.725005819 +0000 UTC m=+0.273409813 container init 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Dec  5 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.743451784 +0000 UTC m=+0.291855768 container start 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:34:55 compute-0 podman[487209]: 2025-12-05 02:34:55.750495758 +0000 UTC m=+0.298899802 container attach 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:34:55 compute-0 nova_compute[349548]: 2025-12-05 02:34:55.920 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.238 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:34:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:34:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]: {
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_id": 0,
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "type": "bluestore"
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    },
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_id": 1,
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "type": "bluestore"
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    },
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_id": 2,
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:        "type": "bluestore"
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]:    }
Dec  5 02:34:56 compute-0 hardcore_bhabha[487225]: }
Dec  5 02:34:56 compute-0 systemd[1]: libpod-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Deactivated successfully.
Dec  5 02:34:56 compute-0 systemd[1]: libpod-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Consumed 1.256s CPU time.
Dec  5 02:34:56 compute-0 podman[487209]: 2025-12-05 02:34:56.995682815 +0000 UTC m=+1.544086799 container died 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-20488091599a5006d00d04a4a49a9aa037ebb21eeac34aa94b67c4e5c0d66058-merged.mount: Deactivated successfully.
Dec  5 02:34:57 compute-0 podman[487209]: 2025-12-05 02:34:57.114347527 +0000 UTC m=+1.662751511 container remove 0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_bhabha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:34:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:57 compute-0 systemd[1]: libpod-conmon-0aaf33170d6d09054a9f215b58812976f18747f6dabcd5d35f2539f523f113f4.scope: Deactivated successfully.
Dec  5 02:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:34:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:57 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:34:57 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 4e48526e-7e77-45bf-8a9c-b934abd29d1f does not exist
Dec  5 02:34:57 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 96d2e696-05b5-482c-8829-5934c662c59e does not exist
Dec  5 02:34:58 compute-0 nova_compute[349548]: 2025-12-05 02:34:58.110 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:34:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:58 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:34:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:34:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:34:59 compute-0 podman[158197]: time="2025-12-05T02:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:34:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8223 "" "Go-http-client/1.1"
Dec  5 02:35:00 compute-0 nova_compute[349548]: 2025-12-05 02:35:00.926 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:35:01 compute-0 nova_compute[349548]: 2025-12-05 02:35:01.091 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:35:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: ERROR   02:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:35:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.146 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.147 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.148 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.149 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.150 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:35:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:35:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2773503289' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:35:02 compute-0 nova_compute[349548]: 2025-12-05 02:35:02.586 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.130 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.132 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3900MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.133 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.134 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.292 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.293 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.324 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:35:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1580157474' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.825 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.838 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.977 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.982 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:35:03 compute-0 nova_compute[349548]: 2025-12-05 02:35:03.983 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:35:04 compute-0 podman[487374]: 2025-12-05 02:35:04.724057242 +0000 UTC m=+0.119712864 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:35:04 compute-0 podman[487372]: 2025-12-05 02:35:04.742989241 +0000 UTC m=+0.146491401 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  5 02:35:04 compute-0 nova_compute[349548]: 2025-12-05 02:35:04.983 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:05 compute-0 nova_compute[349548]: 2025-12-05 02:35:05.931 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:06 compute-0 nova_compute[349548]: 2025-12-05 02:35:06.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:06 compute-0 nova_compute[349548]: 2025-12-05 02:35:06.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:35:07 compute-0 nova_compute[349548]: 2025-12-05 02:35:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:07 compute-0 nova_compute[349548]: 2025-12-05 02:35:07.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:08 compute-0 nova_compute[349548]: 2025-12-05 02:35:08.115 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:08 compute-0 podman[487415]: 2025-12-05 02:35:08.446635273 +0000 UTC m=+0.127429208 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  5 02:35:08 compute-0 podman[487416]: 2025-12-05 02:35:08.479561568 +0000 UTC m=+0.150578160 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, version=9.4, com.redhat.component=ubi9-container)
Dec  5 02:35:08 compute-0 podman[487417]: 2025-12-05 02:35:08.499103945 +0000 UTC m=+0.167481170 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  5 02:35:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.063 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:09 compute-0 nova_compute[349548]: 2025-12-05 02:35:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:10 compute-0 nova_compute[349548]: 2025-12-05 02:35:10.937 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:12 compute-0 nova_compute[349548]: 2025-12-05 02:35:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:35:13 compute-0 nova_compute[349548]: 2025-12-05 02:35:13.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:15 compute-0 nova_compute[349548]: 2025-12-05 02:35:15.943 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:35:16
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'images', 'vms', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:18 compute-0 nova_compute[349548]: 2025-12-05 02:35:18.121 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:35:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:35:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:19 compute-0 podman[487478]: 2025-12-05 02:35:19.72698048 +0000 UTC m=+0.106636344 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  5 02:35:19 compute-0 podman[487476]: 2025-12-05 02:35:19.738387721 +0000 UTC m=+0.132314059 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:35:19 compute-0 podman[487475]: 2025-12-05 02:35:19.748868016 +0000 UTC m=+0.145324138 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:35:19 compute-0 podman[487477]: 2025-12-05 02:35:19.77661039 +0000 UTC m=+0.162767993 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  5 02:35:20 compute-0 nova_compute[349548]: 2025-12-05 02:35:20.948 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:23 compute-0 nova_compute[349548]: 2025-12-05 02:35:23.126 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:25 compute-0 nova_compute[349548]: 2025-12-05 02:35:25.953 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:35:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:35:28 compute-0 nova_compute[349548]: 2025-12-05 02:35:28.127 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:29 compute-0 podman[158197]: time="2025-12-05T02:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:35:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8215 "" "Go-http-client/1.1"
Dec  5 02:35:30 compute-0 nova_compute[349548]: 2025-12-05 02:35:30.958 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: ERROR   02:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:35:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:35:33 compute-0 nova_compute[349548]: 2025-12-05 02:35:33.130 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:35 compute-0 podman[487557]: 2025-12-05 02:35:35.712612056 +0000 UTC m=+0.110006302 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  5 02:35:35 compute-0 podman[487556]: 2025-12-05 02:35:35.727948081 +0000 UTC m=+0.123841774 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  5 02:35:35 compute-0 nova_compute[349548]: 2025-12-05 02:35:35.964 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:38 compute-0 nova_compute[349548]: 2025-12-05 02:35:38.132 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:38 compute-0 podman[487599]: 2025-12-05 02:35:38.719998848 +0000 UTC m=+0.114783171 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, maintainer=Red Hat, Inc., config_id=edpm, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  5 02:35:38 compute-0 podman[487598]: 2025-12-05 02:35:38.738177745 +0000 UTC m=+0.140126126 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  5 02:35:38 compute-0 podman[487600]: 2025-12-05 02:35:38.759469013 +0000 UTC m=+0.150962091 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  5 02:35:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:40 compute-0 nova_compute[349548]: 2025-12-05 02:35:40.970 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:43 compute-0 nova_compute[349548]: 2025-12-05 02:35:43.136 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:35:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:35:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:35:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/935678120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:35:45 compute-0 nova_compute[349548]: 2025-12-05 02:35:45.975 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:35:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:35:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:48 compute-0 nova_compute[349548]: 2025-12-05 02:35:48.140 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:50 compute-0 podman[487652]: 2025-12-05 02:35:50.715075832 +0000 UTC m=+0.117485230 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Dec  5 02:35:50 compute-0 podman[487655]: 2025-12-05 02:35:50.728830881 +0000 UTC m=+0.112343460 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=)
Dec  5 02:35:50 compute-0 podman[487653]: 2025-12-05 02:35:50.745760202 +0000 UTC m=+0.143384031 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:35:50 compute-0 podman[487654]: 2025-12-05 02:35:50.797824753 +0000 UTC m=+0.190517259 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:35:50 compute-0 nova_compute[349548]: 2025-12-05 02:35:50.978 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:53 compute-0 nova_compute[349548]: 2025-12-05 02:35:53.142 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:55 compute-0 nova_compute[349548]: 2025-12-05 02:35:55.986 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:35:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:35:56.240 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:35:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:58 compute-0 nova_compute[349548]: 2025-12-05 02:35:58.145 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 81e514d0-0cdb-4f71-b2d8-55cc621c9633 does not exist
Dec  5 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 3f746431-bf80-48e7-adaa-0139548eac26 does not exist
Dec  5 02:35:58 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 0abcaef5-31c2-4cf7-abb2-d9da583269fc does not exist
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:35:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:35:58 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:35:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:35:59 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:35:59 compute-0 podman[158197]: time="2025-12-05T02:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:35:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8219 "" "Go-http-client/1.1"
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.051284924 +0000 UTC m=+0.091415833 container create b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.004584769 +0000 UTC m=+0.044715698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:00 compute-0 systemd[1]: Started libpod-conmon-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope.
Dec  5 02:36:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.209753122 +0000 UTC m=+0.249884051 container init b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.226368934 +0000 UTC m=+0.266499843 container start b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.232500422 +0000 UTC m=+0.272631381 container attach b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:36:00 compute-0 epic_ride[488019]: 167 167
Dec  5 02:36:00 compute-0 systemd[1]: libpod-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope: Deactivated successfully.
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.241558094 +0000 UTC m=+0.281689043 container died b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 02:36:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be2398cf58b4f5e7378904737dfad404409b358285cd09e9ced1f05af90f0d9-merged.mount: Deactivated successfully.
Dec  5 02:36:00 compute-0 podman[488005]: 2025-12-05 02:36:00.330386211 +0000 UTC m=+0.370517130 container remove b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Dec  5 02:36:00 compute-0 systemd[1]: libpod-conmon-b7c5944302970a20d21743f1ae37fbb9bc337a4178c2d302c536f838503fc6a3.scope: Deactivated successfully.
Dec  5 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.616212654 +0000 UTC m=+0.084122052 container create ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.584669319 +0000 UTC m=+0.052578747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:00 compute-0 systemd[1]: Started libpod-conmon-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope.
Dec  5 02:36:00 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.826861676 +0000 UTC m=+0.294771124 container init ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.847180435 +0000 UTC m=+0.315089823 container start ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Dec  5 02:36:00 compute-0 podman[488041]: 2025-12-05 02:36:00.85423577 +0000 UTC m=+0.322145208 container attach ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Dec  5 02:36:00 compute-0 nova_compute[349548]: 2025-12-05 02:36:00.989 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: ERROR   02:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:36:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.068 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.092 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.093 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.123 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.124 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.125 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.126 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:36:02 compute-0 elated_poitras[488057]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:36:02 compute-0 elated_poitras[488057]: --> relative data size: 1.0
Dec  5 02:36:02 compute-0 elated_poitras[488057]: --> All data devices are unavailable
Dec  5 02:36:02 compute-0 systemd[1]: libpod-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Deactivated successfully.
Dec  5 02:36:02 compute-0 podman[488041]: 2025-12-05 02:36:02.192377373 +0000 UTC m=+1.660286751 container died ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  5 02:36:02 compute-0 systemd[1]: libpod-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Consumed 1.278s CPU time.
Dec  5 02:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5973efe39d538744b7f298fc2aa810d823a00d7ec48211d19a9b54c63704324-merged.mount: Deactivated successfully.
Dec  5 02:36:02 compute-0 podman[488041]: 2025-12-05 02:36:02.278994166 +0000 UTC m=+1.746903524 container remove ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:36:02 compute-0 systemd[1]: libpod-conmon-ad0be05eb48f59a457eb2f21bb8ddc5318b1977df8796af8f1347dbdf980ec62.scope: Deactivated successfully.
Dec  5 02:36:02 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:36:02 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428556063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:36:02 compute-0 nova_compute[349548]: 2025-12-05 02:36:02.666 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.147 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.217 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3936MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.219 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.339 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.340 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.368 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.404545601 +0000 UTC m=+0.094544374 container create 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.365714284 +0000 UTC m=+0.055713067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:03 compute-0 systemd[1]: Started libpod-conmon-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope.
Dec  5 02:36:03 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.552841393 +0000 UTC m=+0.242840176 container init 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.573108471 +0000 UTC m=+0.263107244 container start 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.580101044 +0000 UTC m=+0.270099807 container attach 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:36:03 compute-0 vibrant_albattani[488270]: 167 167
Dec  5 02:36:03 compute-0 systemd[1]: libpod-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope: Deactivated successfully.
Dec  5 02:36:03 compute-0 conmon[488270]: conmon 99f01bbf25d6c33d7ec9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope/container/memory.events
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.587680274 +0000 UTC m=+0.277679047 container died 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff03be082ea669cf36fb4599f5b0519c6304c2663173a44a207232f4d6cc4bbc-merged.mount: Deactivated successfully.
Dec  5 02:36:03 compute-0 podman[488255]: 2025-12-05 02:36:03.667737827 +0000 UTC m=+0.357736570 container remove 99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_albattani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  5 02:36:03 compute-0 systemd[1]: libpod-conmon-99f01bbf25d6c33d7ec99428bbfa42840602f283b2b2322567cc902d7f8704de.scope: Deactivated successfully.
Dec  5 02:36:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:36:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3583223027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.895 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.906 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.925 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.928 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.929 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.930 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.931 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  5 02:36:03 compute-0 nova_compute[349548]: 2025-12-05 02:36:03.959 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  5 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.011640894 +0000 UTC m=+0.100667431 container create 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:03.974862807 +0000 UTC m=+0.063889404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:04 compute-0 systemd[1]: Started libpod-conmon-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope.
Dec  5 02:36:04 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.200608947 +0000 UTC m=+0.289635464 container init 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.213088869 +0000 UTC m=+0.302115356 container start 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Dec  5 02:36:04 compute-0 podman[488313]: 2025-12-05 02:36:04.217272081 +0000 UTC m=+0.306298618 container attach 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]: {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    "0": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "devices": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "/dev/loop3"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            ],
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_name": "ceph_lv0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_size": "21470642176",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "name": "ceph_lv0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "tags": {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_name": "ceph",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.crush_device_class": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.encrypted": "0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_id": "0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.vdo": "0"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            },
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "vg_name": "ceph_vg0"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        }
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    ],
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    "1": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "devices": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "/dev/loop4"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            ],
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_name": "ceph_lv1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_size": "21470642176",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "name": "ceph_lv1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "tags": {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_name": "ceph",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.crush_device_class": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.encrypted": "0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_id": "1",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.vdo": "0"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            },
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "vg_name": "ceph_vg1"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        }
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    ],
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    "2": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "devices": [
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "/dev/loop5"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            ],
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_name": "ceph_lv2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_size": "21470642176",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "name": "ceph_lv2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "tags": {
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.cluster_name": "ceph",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.crush_device_class": "",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.encrypted": "0",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osd_id": "2",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:                "ceph.vdo": "0"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            },
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "type": "block",
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:            "vg_name": "ceph_vg2"
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:        }
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]:    ]
Dec  5 02:36:05 compute-0 reverent_bhaskara[488328]: }
Dec  5 02:36:05 compute-0 systemd[1]: libpod-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope: Deactivated successfully.
Dec  5 02:36:05 compute-0 podman[488313]: 2025-12-05 02:36:05.056014505 +0000 UTC m=+1.145041052 container died 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Dec  5 02:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e0adf3345e301cf7163c338e8b9eb1e192e9657cc1d4e80389b8a6ec5d4aa56-merged.mount: Deactivated successfully.
Dec  5 02:36:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:05 compute-0 podman[488313]: 2025-12-05 02:36:05.175186292 +0000 UTC m=+1.264212819 container remove 7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:36:05 compute-0 systemd[1]: libpod-conmon-7aea174fdf3003ce02c8ce67c36a6c26cf1ba508bbf67e71a5673775ef80093d.scope: Deactivated successfully.
Dec  5 02:36:05 compute-0 podman[488450]: 2025-12-05 02:36:05.9954431 +0000 UTC m=+0.137010586 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:36:05 compute-0 nova_compute[349548]: 2025-12-05 02:36:05.995 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:06 compute-0 podman[488449]: 2025-12-05 02:36:06.004530613 +0000 UTC m=+0.148929071 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.407634939 +0000 UTC m=+0.079118827 container create 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.382113828 +0000 UTC m=+0.053597766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:06 compute-0 systemd[1]: Started libpod-conmon-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope.
Dec  5 02:36:06 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.54347585 +0000 UTC m=+0.214959758 container init 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.55593153 +0000 UTC m=+0.227415418 container start 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.56042061 +0000 UTC m=+0.231904498 container attach 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:36:06 compute-0 keen_taussig[488540]: 167 167
Dec  5 02:36:06 compute-0 systemd[1]: libpod-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope: Deactivated successfully.
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.566531388 +0000 UTC m=+0.238015286 container died 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:36:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8418a7655ce8b4574589b2da50fd315ad95a4608937ff01e6da2b2cede4228b4-merged.mount: Deactivated successfully.
Dec  5 02:36:06 compute-0 podman[488525]: 2025-12-05 02:36:06.6248582 +0000 UTC m=+0.296342098 container remove 089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:36:06 compute-0 systemd[1]: libpod-conmon-089a1ae7cd011b2ea377f67abb9eca2d2d5e5a950380c0963fba7039fde4d488.scope: Deactivated successfully.
Dec  5 02:36:06 compute-0 podman[488562]: 2025-12-05 02:36:06.894669458 +0000 UTC m=+0.089860328 container create 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.933 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.934 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:06 compute-0 nova_compute[349548]: 2025-12-05 02:36:06.935 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:36:06 compute-0 podman[488562]: 2025-12-05 02:36:06.862144414 +0000 UTC m=+0.057335364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:36:06 compute-0 systemd[1]: Started libpod-conmon-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope.
Dec  5 02:36:07 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:36:07 compute-0 nova_compute[349548]: 2025-12-05 02:36:07.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.081131487 +0000 UTC m=+0.276322387 container init 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.112320752 +0000 UTC m=+0.307511642 container start 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:36:07 compute-0 podman[488562]: 2025-12-05 02:36:07.119073728 +0000 UTC m=+0.314264768 container attach 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Dec  5 02:36:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec  5 02:36:08 compute-0 nova_compute[349548]: 2025-12-05 02:36:08.151 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:08 compute-0 relaxed_germain[488578]: {
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_id": 0,
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "type": "bluestore"
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    },
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_id": 1,
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "type": "bluestore"
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    },
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_id": 2,
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:        "type": "bluestore"
Dec  5 02:36:08 compute-0 relaxed_germain[488578]:    }
Dec  5 02:36:08 compute-0 relaxed_germain[488578]: }
Dec  5 02:36:08 compute-0 systemd[1]: libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Deactivated successfully.
Dec  5 02:36:08 compute-0 systemd[1]: libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Consumed 1.178s CPU time.
Dec  5 02:36:08 compute-0 conmon[488578]: conmon 19f596c617150c259455 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope/container/memory.events
Dec  5 02:36:08 compute-0 podman[488562]: 2025-12-05 02:36:08.30483916 +0000 UTC m=+1.500030020 container died 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Dec  5 02:36:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-da10ebf04ff64fc5e490a083edfdf15f790648309b02d7ccf2f3802de754e389-merged.mount: Deactivated successfully.
Dec  5 02:36:08 compute-0 podman[488562]: 2025-12-05 02:36:08.409761294 +0000 UTC m=+1.604952164 container remove 19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Dec  5 02:36:08 compute-0 systemd[1]: libpod-conmon-19f596c617150c2594559bb3c2da76cfa8eb8f3935a6aabc62836bd9269ea6c5.scope: Deactivated successfully.
Dec  5 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:36:08 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:36:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev a46209fb-bdae-41d2-8faf-6f2f0361daf2 does not exist
Dec  5 02:36:08 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 99404964-396b-4a3d-a0dc-b97ced1880d9 does not exist
Dec  5 02:36:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:08 compute-0 podman[488673]: 2025-12-05 02:36:08.964684504 +0000 UTC m=+0.121407453 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:36:08 compute-0 podman[488671]: 2025-12-05 02:36:08.975096026 +0000 UTC m=+0.141071484 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  5 02:36:08 compute-0 podman[488672]: 2025-12-05 02:36:08.990552184 +0000 UTC m=+0.150398354 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, name=ubi9, config_id=edpm, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  5 02:36:09 compute-0 nova_compute[349548]: 2025-12-05 02:36:09.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:09 compute-0 nova_compute[349548]: 2025-12-05 02:36:09.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Dec  5 02:36:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:36:09 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:36:11 compute-0 nova_compute[349548]: 2025-12-05 02:36:11.001 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:11 compute-0 nova_compute[349548]: 2025-12-05 02:36:11.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 57 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Dec  5 02:36:13 compute-0 nova_compute[349548]: 2025-12-05 02:36:13.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:13 compute-0 nova_compute[349548]: 2025-12-05 02:36:13.157 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:36:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.007 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:16 compute-0 nova_compute[349548]: 2025-12-05 02:36:16.107 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:36:16
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.mgr', 'default.rgw.log', 'backups', 'volumes']
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec  5 02:36:18 compute-0 nova_compute[349548]: 2025-12-05 02:36:18.159 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:36:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:36:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:19 compute-0 nova_compute[349548]: 2025-12-05 02:36:19.079 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:19 compute-0 nova_compute[349548]: 2025-12-05 02:36:19.079 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  5 02:36:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec  5 02:36:21 compute-0 nova_compute[349548]: 2025-12-05 02:36:21.011 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 37 op/s
Dec  5 02:36:21 compute-0 podman[488729]: 2025-12-05 02:36:21.720858197 +0000 UTC m=+0.116291325 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  5 02:36:21 compute-0 podman[488731]: 2025-12-05 02:36:21.73681925 +0000 UTC m=+0.116090489 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350)
Dec  5 02:36:21 compute-0 podman[488728]: 2025-12-05 02:36:21.745721928 +0000 UTC m=+0.144467192 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec  5 02:36:21 compute-0 podman[488730]: 2025-12-05 02:36:21.777712976 +0000 UTC m=+0.162573327 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:36:23 compute-0 nova_compute[349548]: 2025-12-05 02:36:23.162 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Dec  5 02:36:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:26 compute-0 nova_compute[349548]: 2025-12-05 02:36:26.015 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:36:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:36:28 compute-0 nova_compute[349548]: 2025-12-05 02:36:28.165 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:29 compute-0 podman[158197]: time="2025-12-05T02:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:36:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8219 "" "Go-http-client/1.1"
Dec  5 02:36:30 compute-0 nova_compute[349548]: 2025-12-05 02:36:30.128 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:36:31 compute-0 nova_compute[349548]: 2025-12-05 02:36:31.020 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: ERROR   02:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:36:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:36:33 compute-0 nova_compute[349548]: 2025-12-05 02:36:33.168 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:36 compute-0 nova_compute[349548]: 2025-12-05 02:36:36.024 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:36 compute-0 podman[488814]: 2025-12-05 02:36:36.70450585 +0000 UTC m=+0.112700591 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:36:36 compute-0 podman[488813]: 2025-12-05 02:36:36.740433832 +0000 UTC m=+0.141443565 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec  5 02:36:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:38 compute-0 nova_compute[349548]: 2025-12-05 02:36:38.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.332 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.338 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.339 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.343 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.344 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.344 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.345 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{'disk.root.size': [], 'disk.device.capacity': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.device.read.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.347 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.348 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.350 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.351 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.353 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.354 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.356 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.357 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:36:38.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:36:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:39 compute-0 podman[488856]: 2025-12-05 02:36:39.689200703 +0000 UTC m=+0.097665045 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Dec  5 02:36:39 compute-0 podman[488857]: 2025-12-05 02:36:39.699362857 +0000 UTC m=+0.098310022 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4)
Dec  5 02:36:39 compute-0 podman[488858]: 2025-12-05 02:36:39.710637665 +0000 UTC m=+0.105611015 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:36:41 compute-0 nova_compute[349548]: 2025-12-05 02:36:41.028 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:43 compute-0 nova_compute[349548]: 2025-12-05 02:36:43.173 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:36:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:36:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:36:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1885167744' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:36:46 compute-0 nova_compute[349548]: 2025-12-05 02:36:46.032 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:36:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:36:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:48 compute-0 nova_compute[349548]: 2025-12-05 02:36:48.176 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:51 compute-0 nova_compute[349548]: 2025-12-05 02:36:51.038 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:52 compute-0 podman[488912]: 2025-12-05 02:36:52.740496431 +0000 UTC m=+0.144145693 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec  5 02:36:52 compute-0 podman[488915]: 2025-12-05 02:36:52.743352144 +0000 UTC m=+0.120732504 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  5 02:36:52 compute-0 podman[488913]: 2025-12-05 02:36:52.754034914 +0000 UTC m=+0.147851501 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  5 02:36:52 compute-0 podman[488914]: 2025-12-05 02:36:52.786397733 +0000 UTC m=+0.173738042 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec  5 02:36:53 compute-0 nova_compute[349548]: 2025-12-05 02:36:53.179 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:56 compute-0 nova_compute[349548]: 2025-12-05 02:36:56.041 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:36:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:36:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:36:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:58 compute-0 nova_compute[349548]: 2025-12-05 02:36:58.183 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:36:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:36:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:36:59 compute-0 podman[158197]: time="2025-12-05T02:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:36:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8223 "" "Go-http-client/1.1"
Dec  5 02:37:00 compute-0 nova_compute[349548]: 2025-12-05 02:37:00.082 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:01 compute-0 nova_compute[349548]: 2025-12-05 02:37:01.045 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: ERROR   02:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:37:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.117 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.118 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.157 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.158 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.158 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.159 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.159 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.188 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:37:03 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736455509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:37:03 compute-0 nova_compute[349548]: 2025-12-05 02:37:03.658 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:37:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.116 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.118 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3944MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.118 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.119 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.192 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.193 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.237 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:37:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:37:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63555516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.710 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.725 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.797 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.801 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:37:04 compute-0 nova_compute[349548]: 2025-12-05 02:37:04.802 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:37:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:05 compute-0 podman[158197]: time="2025-12-05T02:37:05Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:37:05 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:05 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 43259 "" "Go-http-client/1.1"
Dec  5 02:37:06 compute-0 nova_compute[349548]: 2025-12-05 02:37:06.049 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:06 compute-0 nova_compute[349548]: 2025-12-05 02:37:06.752 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:07 compute-0 podman[489038]: 2025-12-05 02:37:07.714849408 +0000 UTC m=+0.116353507 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, 
container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:37:07 compute-0 podman[489039]: 2025-12-05 02:37:07.739631547 +0000 UTC m=+0.134532645 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  5 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.066 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  5 02:37:08 compute-0 nova_compute[349548]: 2025-12-05 02:37:08.187 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:09 compute-0 nova_compute[349548]: 2025-12-05 02:37:09.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:37:10 compute-0 nova_compute[349548]: 2025-12-05 02:37:10.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev e63a8bb8-810e-44dd-af9c-05813104530b does not exist
Dec  5 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ae0cb3df-9921-444d-8929-654a21dc3c14 does not exist
Dec  5 02:37:10 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 03c98e2e-0c37-4f94-811c-9c26ca6cde86 does not exist
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:37:10 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:37:10 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:10 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:37:10 compute-0 podman[489238]: 2025-12-05 02:37:10.416224421 +0000 UTC m=+0.133529305 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  5 02:37:10 compute-0 podman[489237]: 2025-12-05 02:37:10.420430233 +0000 UTC m=+0.147197982 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, distribution-scope=public, version=9.4, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', 
'/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Dec  5 02:37:10 compute-0 podman[489236]: 2025-12-05 02:37:10.422594665 +0000 UTC m=+0.151629350 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  5 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.052 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:11 compute-0 nova_compute[349548]: 2025-12-05 02:37:11.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.175085806 +0000 UTC m=+0.093889805 container create 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:37:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.140583735 +0000 UTC m=+0.059387874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:11 compute-0 systemd[1]: Started libpod-conmon-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope.
Dec  5 02:37:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.323863812 +0000 UTC m=+0.242667821 container init 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.34274215 +0000 UTC m=+0.261546139 container start 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.349439804 +0000 UTC m=+0.268243813 container attach 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:37:11 compute-0 eloquent_perlman[489421]: 167 167
Dec  5 02:37:11 compute-0 systemd[1]: libpod-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope: Deactivated successfully.
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.355036577 +0000 UTC m=+0.273840576 container died 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  5 02:37:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4bd43943694570aac397ed5225c14b2dba046edfcf459ed7a8a225be95a17ad-merged.mount: Deactivated successfully.
Dec  5 02:37:11 compute-0 podman[489405]: 2025-12-05 02:37:11.432790813 +0000 UTC m=+0.351594782 container remove 049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  5 02:37:11 compute-0 systemd[1]: libpod-conmon-049e7dad698f4b831ab7c2ac0014ccd9d59afcd8d18f8acade41a3c48620c29a.scope: Deactivated successfully.
Dec  5 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.740009366 +0000 UTC m=+0.086930023 container create b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.704590788 +0000 UTC m=+0.051511505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:11 compute-0 systemd[1]: Started libpod-conmon-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope.
Dec  5 02:37:11 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.935015673 +0000 UTC m=+0.281936410 container init b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.965098546 +0000 UTC m=+0.312019183 container start b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  5 02:37:11 compute-0 podman[489444]: 2025-12-05 02:37:11.971378938 +0000 UTC m=+0.318299605 container attach b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:37:13 compute-0 nova_compute[349548]: 2025-12-05 02:37:13.065 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:37:13 compute-0 nova_compute[349548]: 2025-12-05 02:37:13.189 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:13 compute-0 frosty_fermat[489461]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:37:13 compute-0 frosty_fermat[489461]: --> relative data size: 1.0
Dec  5 02:37:13 compute-0 frosty_fermat[489461]: --> All data devices are unavailable
Dec  5 02:37:13 compute-0 systemd[1]: libpod-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Deactivated successfully.
Dec  5 02:37:13 compute-0 podman[489444]: 2025-12-05 02:37:13.298145671 +0000 UTC m=+1.645066328 container died b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 02:37:13 compute-0 systemd[1]: libpod-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Consumed 1.285s CPU time.
Dec  5 02:37:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7eec744628028bff7c2313b05aa0f01b5472c8e6a4c19748711c464f4dd55e2-merged.mount: Deactivated successfully.
Dec  5 02:37:13 compute-0 podman[489444]: 2025-12-05 02:37:13.384173307 +0000 UTC m=+1.731093934 container remove b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermat, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Dec  5 02:37:13 compute-0 systemd[1]: libpod-conmon-b15319d84ffa9ecab4b01fc0d661d1e7b79cedc2aacd085bec793456b1199d50.scope: Deactivated successfully.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.738074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233738134, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 1321, "num_deletes": 250, "total_data_size": 2050765, "memory_usage": 2078336, "flush_reason": "Manual Compaction"}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233753272, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 1192564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53308, "largest_seqno": 54628, "table_properties": {"data_size": 1187829, "index_size": 2130, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12280, "raw_average_key_size": 20, "raw_value_size": 1177581, "raw_average_value_size": 1982, "num_data_blocks": 97, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764902096, "oldest_key_time": 1764902096, "file_creation_time": 1764902233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 15295 microseconds, and 8413 cpu microseconds.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.753367) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 1192564 bytes OK
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.753394) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756033) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756062) EVENT_LOG_v1 {"time_micros": 1764902233756053, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.756120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 2044868, prev total WAL file size 2044868, number of live WAL files 2.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.757556) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(1164KB)], [128(8930KB)]
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233757606, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 10337330, "oldest_snapshot_seqno": -1}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 6813 keys, 7865586 bytes, temperature: kUnknown
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233824068, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 7865586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7824950, "index_size": 22475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 178050, "raw_average_key_size": 26, "raw_value_size": 7706698, "raw_average_value_size": 1131, "num_data_blocks": 890, "num_entries": 6813, "num_filter_entries": 6813, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764897142, "oldest_key_time": 0, "file_creation_time": 1764902233, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d2a3e37e-222f-447f-af23-2a52f135922f", "db_session_id": "4QDKSXZ9659NG2VXPQ9P", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.824409) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 7865586 bytes
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.826785) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.2 rd, 118.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(15.3) write-amplify(6.6) OK, records in: 7262, records dropped: 449 output_compression: NoCompression
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.826815) EVENT_LOG_v1 {"time_micros": 1764902233826801, "job": 78, "event": "compaction_finished", "compaction_time_micros": 66592, "compaction_time_cpu_micros": 36646, "output_level": 6, "num_output_files": 1, "total_output_size": 7865586, "num_input_records": 7262, "num_output_records": 6813, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233827459, "job": 78, "event": "table_file_deletion", "file_number": 130}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764902233832543, "job": 78, "event": "table_file_deletion", "file_number": 128}
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.757304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:13 compute-0 ceph-mon[192914]: rocksdb: (Original Log Time 2025/12/05-02:37:13.832738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.544199572 +0000 UTC m=+0.066419218 container create bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.510746451 +0000 UTC m=+0.032966157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:14 compute-0 systemd[1]: Started libpod-conmon-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope.
Dec  5 02:37:14 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.696615443 +0000 UTC m=+0.218835149 container init bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.713140813 +0000 UTC m=+0.235360469 container start bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.720755634 +0000 UTC m=+0.242975360 container attach bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:37:14 compute-0 romantic_saha[489653]: 167 167
Dec  5 02:37:14 compute-0 systemd[1]: libpod-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope: Deactivated successfully.
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.726113849 +0000 UTC m=+0.248333535 container died bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:37:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-773c81cc43e355514af9ca0851138f229d481544da94f5f7b644875be3eaec27-merged.mount: Deactivated successfully.
Dec  5 02:37:14 compute-0 podman[489638]: 2025-12-05 02:37:14.80335945 +0000 UTC m=+0.325579106 container remove bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_saha, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:37:14 compute-0 systemd[1]: libpod-conmon-bf931a445d428a0d1635102bd9140cf493e25318080054bbc130042515b231cc.scope: Deactivated successfully.
Dec  5 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.077954027 +0000 UTC m=+0.087478949 container create 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Dec  5 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.046725151 +0000 UTC m=+0.056250113 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:15 compute-0 systemd[1]: Started libpod-conmon-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope.
Dec  5 02:37:15 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.256832937 +0000 UTC m=+0.266357939 container init 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.276214079 +0000 UTC m=+0.285739021 container start 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Dec  5 02:37:15 compute-0 podman[489676]: 2025-12-05 02:37:15.282655986 +0000 UTC m=+0.292180998 container attach 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:37:16 compute-0 nova_compute[349548]: 2025-12-05 02:37:16.059 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:16 compute-0 upbeat_kare[489692]: {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    "0": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "devices": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "/dev/loop3"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            ],
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_name": "ceph_lv0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_size": "21470642176",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "name": "ceph_lv0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "tags": {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_name": "ceph",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.crush_device_class": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.encrypted": "0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_id": "0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.vdo": "0"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            },
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "vg_name": "ceph_vg0"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        }
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    ],
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    "1": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "devices": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "/dev/loop4"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            ],
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_name": "ceph_lv1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_size": "21470642176",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "name": "ceph_lv1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "tags": {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_name": "ceph",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.crush_device_class": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.encrypted": "0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_id": "1",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.vdo": "0"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            },
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "vg_name": "ceph_vg1"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        }
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    ],
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    "2": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "devices": [
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "/dev/loop5"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            ],
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_name": "ceph_lv2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_size": "21470642176",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "name": "ceph_lv2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "tags": {
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.cluster_name": "ceph",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.crush_device_class": "",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.encrypted": "0",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osd_id": "2",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:                "ceph.vdo": "0"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            },
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "type": "block",
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:            "vg_name": "ceph_vg2"
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:        }
Dec  5 02:37:16 compute-0 upbeat_kare[489692]:    ]
Dec  5 02:37:16 compute-0 upbeat_kare[489692]: }
Dec  5 02:37:16 compute-0 systemd[1]: libpod-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope: Deactivated successfully.
Dec  5 02:37:16 compute-0 podman[489676]: 2025-12-05 02:37:16.111960876 +0000 UTC m=+1.121485828 container died 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  5 02:37:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba29bf67165a349d2ce6b09b8d86f910289168b96b6f45566b9b7d1b346c4ac3-merged.mount: Deactivated successfully.
Dec  5 02:37:16 compute-0 podman[489676]: 2025-12-05 02:37:16.215653254 +0000 UTC m=+1.225178206 container remove 97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  5 02:37:16 compute-0 systemd[1]: libpod-conmon-97ff5e2eaa0e9a925e3fc052d022daf1c497a41390ed6e1e3a4c9782cb74d33c.scope: Deactivated successfully.
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:37:16
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.mgr', 'vms']
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.447111062 +0000 UTC m=+0.099285132 container create 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.417948006 +0000 UTC m=+0.070122156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:17 compute-0 systemd[1]: Started libpod-conmon-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope.
Dec  5 02:37:17 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.577491584 +0000 UTC m=+0.229665684 container init 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.593363575 +0000 UTC m=+0.245537645 container start 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.600254605 +0000 UTC m=+0.252428695 container attach 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:37:17 compute-0 charming_torvalds[489869]: 167 167
Dec  5 02:37:17 compute-0 systemd[1]: libpod-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope: Deactivated successfully.
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.604287592 +0000 UTC m=+0.256461712 container died 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Dec  5 02:37:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-60f287d55666313ef9199d5e4780852ac981aa896278b1c990806919ec7652d5-merged.mount: Deactivated successfully.
Dec  5 02:37:17 compute-0 podman[489853]: 2025-12-05 02:37:17.69142065 +0000 UTC m=+0.343594750 container remove 81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Dec  5 02:37:17 compute-0 systemd[1]: libpod-conmon-81adf68ee9fdedb6e39cd5c4475ce812bb7869d4ab9c26a828b2fff1dfdea498.scope: Deactivated successfully.
Dec  5 02:37:17 compute-0 podman[489892]: 2025-12-05 02:37:17.979208039 +0000 UTC m=+0.084825922 container create 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Dec  5 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:17.944872763 +0000 UTC m=+0.050490696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:37:18 compute-0 systemd[1]: Started libpod-conmon-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope.
Dec  5 02:37:18 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.145252595 +0000 UTC m=+0.250870528 container init 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.178575822 +0000 UTC m=+0.284193695 container start 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Dec  5 02:37:18 compute-0 podman[489892]: 2025-12-05 02:37:18.185109322 +0000 UTC m=+0.290727255 container attach 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Dec  5 02:37:18 compute-0 nova_compute[349548]: 2025-12-05 02:37:18.192 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:37:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:37:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]: {
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_id": 0,
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "type": "bluestore"
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    },
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_id": 1,
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "type": "bluestore"
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    },
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_id": 2,
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:        "type": "bluestore"
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]:    }
Dec  5 02:37:19 compute-0 festive_hodgkin[489906]: }
Dec  5 02:37:19 compute-0 systemd[1]: libpod-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Deactivated successfully.
Dec  5 02:37:19 compute-0 podman[489892]: 2025-12-05 02:37:19.392362237 +0000 UTC m=+1.497980110 container died 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  5 02:37:19 compute-0 systemd[1]: libpod-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Consumed 1.209s CPU time.
Dec  5 02:37:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-272e076b75b6d2d3970460941ae550776c9e4544e10b0d22972c2dd535be2803-merged.mount: Deactivated successfully.
Dec  5 02:37:19 compute-0 podman[489892]: 2025-12-05 02:37:19.497654152 +0000 UTC m=+1.603272035 container remove 8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:37:19 compute-0 systemd[1]: libpod-conmon-8d16c2035d87a7a2f31d2a502d48b2df713d68c516fd58539b98c2a0eccdb731.scope: Deactivated successfully.
Dec  5 02:37:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:37:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:19 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:37:19 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev ba6f8717-8ceb-4d72-b90f-7932a88da4c5 does not exist
Dec  5 02:37:19 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 58e26a78-d765-4458-999b-2f20f3982d37 does not exist
Dec  5 02:37:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:20 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:37:21 compute-0 nova_compute[349548]: 2025-12-05 02:37:21.063 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:23 compute-0 nova_compute[349548]: 2025-12-05 02:37:23.197 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:23 compute-0 podman[490002]: 2025-12-05 02:37:23.726150051 +0000 UTC m=+0.120645881 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:37:23 compute-0 podman[490001]: 2025-12-05 02:37:23.737498911 +0000 UTC m=+0.131068444 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:37:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:23 compute-0 podman[490004]: 2025-12-05 02:37:23.763933738 +0000 UTC m=+0.146756849 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  5 02:37:23 compute-0 podman[490003]: 2025-12-05 02:37:23.769716645 +0000 UTC m=+0.160735624 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec  5 02:37:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:26 compute-0 nova_compute[349548]: 2025-12-05 02:37:26.068 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:37:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:37:28 compute-0 nova_compute[349548]: 2025-12-05 02:37:28.198 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:29 compute-0 podman[158197]: time="2025-12-05T02:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:37:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8217 "" "Go-http-client/1.1"
Dec  5 02:37:31 compute-0 nova_compute[349548]: 2025-12-05 02:37:31.071 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: ERROR   02:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:37:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:37:33 compute-0 nova_compute[349548]: 2025-12-05 02:37:33.201 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:36 compute-0 nova_compute[349548]: 2025-12-05 02:37:36.074 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:38 compute-0 nova_compute[349548]: 2025-12-05 02:37:38.205 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:38 compute-0 podman[490088]: 2025-12-05 02:37:38.689127329 +0000 UTC m=+0.105414299 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  5 02:37:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:38 compute-0 podman[490089]: 2025-12-05 02:37:38.746610557 +0000 UTC m=+0.147749287 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  5 02:37:39 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:40 compute-0 podman[490130]: 2025-12-05 02:37:40.729108543 +0000 UTC m=+0.130580179 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  5 02:37:40 compute-0 podman[490132]: 2025-12-05 02:37:40.744207672 +0000 UTC m=+0.135221765 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  5 02:37:40 compute-0 podman[490131]: 2025-12-05 02:37:40.758500906 +0000 UTC m=+0.162385482 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is 
designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=)
Dec  5 02:37:41 compute-0 nova_compute[349548]: 2025-12-05 02:37:41.078 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:41 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:43 compute-0 nova_compute[349548]: 2025-12-05 02:37:43.208 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:43 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:43 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:45 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Dec  5 02:37:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Dec  5 02:37:45 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Dec  5 02:37:45 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3647953052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Dec  5 02:37:46 compute-0 nova_compute[349548]: 2025-12-05 02:37:46.082 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:37:46 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:37:47 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:48 compute-0 nova_compute[349548]: 2025-12-05 02:37:48.212 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:48 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:49 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:51 compute-0 nova_compute[349548]: 2025-12-05 02:37:51.087 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:51 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:53 compute-0 nova_compute[349548]: 2025-12-05 02:37:53.215 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:53 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:53 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:54 compute-0 podman[490185]: 2025-12-05 02:37:54.719646067 +0000 UTC m=+0.113234216 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  5 02:37:54 compute-0 podman[490184]: 2025-12-05 02:37:54.728642978 +0000 UTC m=+0.129321963 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec  5 02:37:54 compute-0 podman[490187]: 2025-12-05 02:37:54.733315834 +0000 UTC m=+0.114078611 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Dec  5 02:37:54 compute-0 podman[490186]: 2025-12-05 02:37:54.774802737 +0000 UTC m=+0.157792379 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  5 02:37:55 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:56 compute-0 nova_compute[349548]: 2025-12-05 02:37:56.091 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.241 287122 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:37:56 compute-0 ovn_metadata_agent[287107]: 2025-12-05 02:37:56.242 287122 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:37:57 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:58 compute-0 nova_compute[349548]: 2025-12-05 02:37:58.219 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:37:58 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:37:59 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:37:59 compute-0 podman[158197]: time="2025-12-05T02:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 42580 "" "Go-http-client/1.1"
Dec  5 02:37:59 compute-0 podman[158197]: @ - - [05/Dec/2025:02:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8217 "" "Go-http-client/1.1"
Dec  5 02:38:01 compute-0 nova_compute[349548]: 2025-12-05 02:38:01.095 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:01 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: ERROR   02:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:38:01 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:38:03 compute-0 nova_compute[349548]: 2025-12-05 02:38:03.222 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:03 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2675: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:03 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.099 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.099 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.263 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.264 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:38:04 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:38:04 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472407716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:38:04 compute-0 nova_compute[349548]: 2025-12-05 02:38:04.792 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:38:05 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.414 349552 WARNING nova.virt.libvirt.driver [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.416 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3908MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.417 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.418 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.487 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.488 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.504 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  5 02:38:05 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Dec  5 02:38:05 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949335426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.957 349552 DEBUG oslo_concurrency.processutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.972 349552 DEBUG nova.compute.provider_tree [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed in ProviderTree for provider: acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.992 349552 DEBUG nova.scheduler.client.report [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Inventory has not changed for provider acf26aa2-2fef-4a53-8a44-6cfa2eb15d17 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.995 349552 DEBUG nova.compute.resource_tracker [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  5 02:38:05 compute-0 nova_compute[349548]: 2025-12-05 02:38:05.996 349552 DEBUG oslo_concurrency.lockutils [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  5 02:38:06 compute-0 nova_compute[349548]: 2025-12-05 02:38:06.098 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:06 compute-0 nova_compute[349548]: 2025-12-05 02:38:06.964 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  5 02:38:07 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.067 349552 DEBUG nova.compute.manager [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  5 02:38:08 compute-0 nova_compute[349548]: 2025-12-05 02:38:08.225 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:08 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:09 compute-0 nova_compute[349548]: 2025-12-05 02:38:09.067 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:09 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:09 compute-0 podman[490311]: 2025-12-05 02:38:09.72466403 +0000 UTC m=+0.117668697 container health_status b49baea34570e78c68068add804749a86babe170fbfa05ac86b965dfd23fd7cc (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  5 02:38:09 compute-0 podman[490310]: 2025-12-05 02:38:09.75908822 +0000 UTC m=+0.156593954 container health_status 33ff8670f5768a691d6e2b39def69eecfc99eefac206190daf73f166a971a638 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec  5 02:38:11 compute-0 nova_compute[349548]: 2025-12-05 02:38:11.100 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:11 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2679: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:11 compute-0 podman[490352]: 2025-12-05 02:38:11.716518204 +0000 UTC m=+0.112484331 container health_status 88e42465dfb3a5f5c64c3d8468edb954189cc66550ea8f4d79f645cae6d3a335 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Dec  5 02:38:11 compute-0 podman[490351]: 2025-12-05 02:38:11.721087233 +0000 UTC m=+0.120944379 container health_status 088cb6fe988cfe207701d3d7c9567535893165bd389d990900cd821e81322b54 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, 
com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc.)
Dec  5 02:38:11 compute-0 podman[490350]: 2025-12-05 02:38:11.763810827 +0000 UTC m=+0.169326983 container health_status 01c29469e5eddd78c047fa21d977a5c0b928e5ae51a2a0d8ae09b4e2348e4424 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  5 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.061 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:12 compute-0 nova_compute[349548]: 2025-12-05 02:38:12.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:13 compute-0 nova_compute[349548]: 2025-12-05 02:38:13.229 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:13 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:13 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:15 compute-0 nova_compute[349548]: 2025-12-05 02:38:15.066 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:15 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:16 compute-0 nova_compute[349548]: 2025-12-05 02:38:16.106 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Optimize plan auto_2025-12-05_02:38:16
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] do_upmap
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr']
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [balancer INFO root] prepared 0/10 changes
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] scanning for idle connections..
Dec  5 02:38:16 compute-0 ceph-mgr[193209]: [volumes INFO mgr_util] cleaning up connections: []
Dec  5 02:38:16 compute-0 systemd-logind[792]: New session 67 of user zuul.
Dec  5 02:38:16 compute-0 systemd[1]: Started Session 67 of User zuul.
Dec  5 02:38:17 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:18 compute-0 nova_compute[349548]: 2025-12-05 02:38:18.231 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:38:18 compute-0 ceph-mgr[193209]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  5 02:38:18 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:19 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:20 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15879 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:21 compute-0 nova_compute[349548]: 2025-12-05 02:38:21.062 349552 DEBUG oslo_service.periodic_task [None req-953097c9-8d1b-48c8-918f-7d1db43e8e51 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  5 02:38:21 compute-0 nova_compute[349548]: 2025-12-05 02:38:21.112 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev c6b284c2-2224-42e8-9ddd-61ceb1b93540 does not exist
Dec  5 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 18fc9123-a481-42fb-870a-3a053c9ba51f does not exist
Dec  5 02:38:21 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev fb29df09-c097-4a72-b6fb-aba2a8a62520 does not exist
Dec  5 02:38:21 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15881 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:21 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  5 02:38:21 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:21 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Dec  5 02:38:21 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2273979279' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.320236482 +0000 UTC m=+0.102704255 container create 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.273942117 +0000 UTC m=+0.056409900 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:22 compute-0 systemd[1]: Started libpod-conmon-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope.
Dec  5 02:38:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.447398656 +0000 UTC m=+0.229866509 container init 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.465970259 +0000 UTC m=+0.248438072 container start 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.474275493 +0000 UTC m=+0.256743296 container attach 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Dec  5 02:38:22 compute-0 frosty_hertz[490943]: 167 167
Dec  5 02:38:22 compute-0 systemd[1]: libpod-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope: Deactivated successfully.
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.479590183 +0000 UTC m=+0.262057986 container died 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c15fe9051b497c8e572e16586cb612808aae0f015f5ffdb2c620221228401d2d-merged.mount: Deactivated successfully.
Dec  5 02:38:22 compute-0 podman[490921]: 2025-12-05 02:38:22.542567038 +0000 UTC m=+0.325034801 container remove 231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_hertz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:22 compute-0 systemd[1]: libpod-conmon-231641cee290a00313b1f675a3940094fd5dd0298cd408ebae698a23085380b3.scope: Deactivated successfully.
Dec  5 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.793329834 +0000 UTC m=+0.079913903 container create e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  5 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.76479269 +0000 UTC m=+0.051376789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:22 compute-0 systemd[1]: Started libpod-conmon-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope.
Dec  5 02:38:22 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.967042159 +0000 UTC m=+0.253626238 container init e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.979419718 +0000 UTC m=+0.266003777 container start e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  5 02:38:22 compute-0 podman[490967]: 2025-12-05 02:38:22.983784921 +0000 UTC m=+0.270368980 container attach e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:23 compute-0 nova_compute[349548]: 2025-12-05 02:38:23.232 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:23 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:23 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:24 compute-0 funny_ganguly[490983]: --> passed data devices: 0 physical, 3 LVM
Dec  5 02:38:24 compute-0 funny_ganguly[490983]: --> relative data size: 1.0
Dec  5 02:38:24 compute-0 funny_ganguly[490983]: --> All data devices are unavailable
Dec  5 02:38:24 compute-0 systemd[1]: libpod-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Deactivated successfully.
Dec  5 02:38:24 compute-0 systemd[1]: libpod-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Consumed 1.274s CPU time.
Dec  5 02:38:24 compute-0 podman[490967]: 2025-12-05 02:38:24.307824735 +0000 UTC m=+1.594408834 container died e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  5 02:38:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1d6a7839f0d224d90e8bea2c9bde6b7f2272d1023f7059e305e581413bdcfef-merged.mount: Deactivated successfully.
Dec  5 02:38:24 compute-0 podman[490967]: 2025-12-05 02:38:24.428340952 +0000 UTC m=+1.714925011 container remove e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  5 02:38:24 compute-0 systemd[1]: libpod-conmon-e14c9ab73f417af1b0f9af095f0cb7d7ec33e147fc9ba7682dc2ecd5304b7c3c.scope: Deactivated successfully.
Dec  5 02:38:24 compute-0 ovs-vsctl[491130]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  5 02:38:24 compute-0 podman[491098]: 2025-12-05 02:38:24.908993457 +0000 UTC m=+0.121609458 container health_status 4b650b296b7a2b28da70f904bfd3049734fc880af03729747ae1e14054e450ee (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:38:24 compute-0 podman[491099]: 2025-12-05 02:38:24.934158767 +0000 UTC m=+0.133496814 container health_status 602d63770272add5d229903ad8317e5241e18dca8598e5e824b2f08d5f28bed9 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  5 02:38:24 compute-0 podman[491100]: 2025-12-05 02:38:24.938027946 +0000 UTC m=+0.136958841 container health_status fe895ec28075e62de3cae61e877391ba55fcce1fe8223612a0944d7386fb26f4 (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git)
Dec  5 02:38:25 compute-0 podman[491189]: 2025-12-05 02:38:25.05493472 +0000 UTC m=+0.131020243 container health_status d5a14c82ca79b2b5e7095a692cfba39e0bbcb13006c5aa3eeb722fe8778aac5d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  5 02:38:25 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.549867488 +0000 UTC m=+0.094826773 container create 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.512574107 +0000 UTC m=+0.057533452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:25 compute-0 systemd[1]: Started libpod-conmon-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope.
Dec  5 02:38:25 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.730292693 +0000 UTC m=+0.275251968 container init 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Dec  5 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.750149773 +0000 UTC m=+0.295109068 container start 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  5 02:38:25 compute-0 podman[491307]: 2025-12-05 02:38:25.757548961 +0000 UTC m=+0.302508216 container attach 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:25 compute-0 interesting_jemison[491332]: 167 167
Dec  5 02:38:25 compute-0 systemd[1]: libpod-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope: Deactivated successfully.
Dec  5 02:38:25 compute-0 podman[491340]: 2025-12-05 02:38:25.838654667 +0000 UTC m=+0.059341724 container died 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:38:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2322a8c2eed5e7aa0d1e019ee0707e37f0ec60cf97c5bce9a7abeb29745ec2-merged.mount: Deactivated successfully.
Dec  5 02:38:25 compute-0 podman[491340]: 2025-12-05 02:38:25.915228475 +0000 UTC m=+0.135915522 container remove 4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:38:25 compute-0 systemd[1]: libpod-conmon-4ed4eba216ee27a88d7a529a9990823bb00a5cd6aa0116550380afe3094b9c6c.scope: Deactivated successfully.
Dec  5 02:38:26 compute-0 nova_compute[349548]: 2025-12-05 02:38:26.113 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  5 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  5 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.21423023 +0000 UTC m=+0.089472471 container create da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:26 compute-0 virtqemud[138703]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec  5 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.178460682 +0000 UTC m=+0.053703033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:26 compute-0 systemd[1]: Started libpod-conmon-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope.
Dec  5 02:38:26 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.373702444 +0000 UTC m=+0.248944775 container init da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.396768084 +0000 UTC m=+0.272010335 container start da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:26 compute-0 podman[491439]: 2025-12-05 02:38:26.400976393 +0000 UTC m=+0.276218694 container attach da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:38:26 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: cache status {prefix=cache status} (starting...)
Dec  5 02:38:27 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: client ls {prefix=client ls} (starting...)
Dec  5 02:38:27 compute-0 jolly_knuth[491471]: {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    "0": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "devices": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "/dev/loop3"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            ],
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_name": "ceph_lv0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_size": "21470642176",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8c4de221-4fda-4bb1-b794-fc4329742186,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "name": "ceph_lv0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "tags": {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_uuid": "39NTof-xFKX-Z9TH-v5ih-gwlB-QTJZ-fgiCd0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_name": "ceph",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.crush_device_class": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.encrypted": "0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_fsid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_id": "0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.vdo": "0"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            },
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "vg_name": "ceph_vg0"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        }
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    ],
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    "1": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "devices": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "/dev/loop4"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            ],
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_name": "ceph_lv1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_size": "21470642176",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=944e6457-e96a-45b2-ba7f-23ecd70be9f8,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "name": "ceph_lv1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "path": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "tags": {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_uuid": "q69o6U-m4Dt-9Otw-3yjm-m9LX-lMRB-m1vzQU",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_name": "ceph",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.crush_device_class": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.encrypted": "0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_fsid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_id": "1",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.vdo": "0"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            },
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "vg_name": "ceph_vg1"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        }
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    ],
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    "2": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "devices": [
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "/dev/loop5"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            ],
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_name": "ceph_lv2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_size": "21470642176",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=cbd280d3-cbd8-528b-ace6-2b3a887cdcee,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=adfceb0a-e5d7-48a8-b6ba-0c42f745777c,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "lv_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "name": "ceph_lv2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "path": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "tags": {
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.block_uuid": "Ur4hFt-MI3s-i61X-9ESf-ZFFw-PCwz-il9DAb",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cephx_lockbox_secret": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.cluster_name": "ceph",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.crush_device_class": "",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.encrypted": "0",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_fsid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osd_id": "2",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:                "ceph.vdo": "0"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            },
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "type": "block",
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:            "vg_name": "ceph_vg2"
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:        }
Dec  5 02:38:27 compute-0 jolly_knuth[491471]:    ]
Dec  5 02:38:27 compute-0 jolly_knuth[491471]: }
Dec  5 02:38:27 compute-0 systemd[1]: libpod-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope: Deactivated successfully.
Dec  5 02:38:27 compute-0 podman[491439]: 2025-12-05 02:38:27.206753631 +0000 UTC m=+1.081995882 container died da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Dec  5 02:38:27 compute-0 lvm[491651]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  5 02:38:27 compute-0 lvm[491651]: VG ceph_vg0 finished
Dec  5 02:38:27 compute-0 lvm[491658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec  5 02:38:27 compute-0 lvm[491658]: VG ceph_vg2 finished
Dec  5 02:38:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e93b6ff996702dce68d7303bc497d1da3751820d8324d1986ece4d0237fa3a8-merged.mount: Deactivated successfully.
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:27 compute-0 podman[491439]: 2025-12-05 02:38:27.292097096 +0000 UTC m=+1.167339347 container remove da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  5 02:38:27 compute-0 systemd[1]: libpod-conmon-da706aafdd4c35ccc1d5057a5d4b485e652b69df6b215d716c71ef6b305d2db4.scope: Deactivated successfully.
Dec  5 02:38:27 compute-0 lvm[491686]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec  5 02:38:27 compute-0 lvm[491686]: VG ceph_vg1 finished
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] _maybe_adjust
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0009191400908380543 of space, bias 1.0, pg target 0.2757420272514163 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  5 02:38:27 compute-0 ceph-mgr[193209]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec  5 02:38:27 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: damage ls {prefix=damage ls} (starting...)
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump loads {prefix=dump loads} (starting...)
Dec  5 02:38:28 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15885 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.206121645 +0000 UTC m=+0.122392680 container create bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.116201331 +0000 UTC m=+0.032472386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:28 compute-0 nova_compute[349548]: 2025-12-05 02:38:28.234 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:28 compute-0 systemd[1]: Started libpod-conmon-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope.
Dec  5 02:38:28 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.302970805 +0000 UTC m=+0.219241870 container init bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.31984787 +0000 UTC m=+0.236118905 container start bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.324148602 +0000 UTC m=+0.240419657 container attach bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Dec  5 02:38:28 compute-0 funny_brattain[492018]: 167 167
Dec  5 02:38:28 compute-0 systemd[1]: libpod-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope: Deactivated successfully.
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.326068666 +0000 UTC m=+0.242339701 container died bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec  5 02:38:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f05bc9066e0cfe1d0aff81f454e1e8744b47753fb81a2b52d4ecbeb024a6a58-merged.mount: Deactivated successfully.
Dec  5 02:38:28 compute-0 podman[491967]: 2025-12-05 02:38:28.386669323 +0000 UTC m=+0.302940358 container remove bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  5 02:38:28 compute-0 systemd[1]: libpod-conmon-bd32dd67c950b9505c8f1e5bf8a3a1a1c9b1fb61fcaafe47e909b5b774db0f59.scope: Deactivated successfully.
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec  5 02:38:28 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15887 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.58845819 +0000 UTC m=+0.064969292 container create a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec  5 02:38:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Dec  5 02:38:28 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603919827' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Dec  5 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.562867639 +0000 UTC m=+0.039378831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Dec  5 02:38:28 compute-0 systemd[1]: Started libpod-conmon-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope.
Dec  5 02:38:28 compute-0 systemd[1]: Started libcrun container.
Dec  5 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  5 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.699090688 +0000 UTC m=+0.175601810 container init a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Dec  5 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.713684049 +0000 UTC m=+0.190195151 container start a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  5 02:38:28 compute-0 podman[492064]: 2025-12-05 02:38:28.718733722 +0000 UTC m=+0.195244844 container attach a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Dec  5 02:38:28 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec  5 02:38:28 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371342701' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  5 02:38:29 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15895 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:29 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:29.258+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:38:29 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:38:29 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:29 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: ops {prefix=ops} (starting...)
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741055999' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Dec  5 02:38:29 compute-0 silly_merkle[492097]: {
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    "8c4de221-4fda-4bb1-b794-fc4329742186": {
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_id": 0,
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_uuid": "8c4de221-4fda-4bb1-b794-fc4329742186",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "type": "bluestore"
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    },
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    "944e6457-e96a-45b2-ba7f-23ecd70be9f8": {
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_id": 1,
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_uuid": "944e6457-e96a-45b2-ba7f-23ecd70be9f8",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "type": "bluestore"
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    },
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    "adfceb0a-e5d7-48a8-b6ba-0c42f745777c": {
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "ceph_fsid": "cbd280d3-cbd8-528b-ace6-2b3a887cdcee",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_id": 2,
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "osd_uuid": "adfceb0a-e5d7-48a8-b6ba-0c42f745777c",
Dec  5 02:38:29 compute-0 silly_merkle[492097]:        "type": "bluestore"
Dec  5 02:38:29 compute-0 silly_merkle[492097]:    }
Dec  5 02:38:29 compute-0 silly_merkle[492097]: }
Dec  5 02:38:29 compute-0 podman[492064]: 2025-12-05 02:38:29.690752205 +0000 UTC m=+1.167263317 container died a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Dec  5 02:38:29 compute-0 systemd[1]: libpod-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope: Deactivated successfully.
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703454689' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Dec  5 02:38:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5f02d74ef53ae62a24e0fc555c669b197971b74f2eff843649d18e6df9290e1-merged.mount: Deactivated successfully.
Dec  5 02:38:29 compute-0 podman[158197]: time="2025-12-05T02:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  5 02:38:29 compute-0 podman[492064]: 2025-12-05 02:38:29.765395567 +0000 UTC m=+1.241906669 container remove a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  5 02:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 44140 "" "Go-http-client/1.1"
Dec  5 02:38:29 compute-0 systemd[1]: libpod-conmon-a3bd66726044175ac2121cefacc47f02572aa495e82fa43740adbbd3bab50b3d.scope: Deactivated successfully.
Dec  5 02:38:29 compute-0 podman[158197]: @ - - [05/Dec/2025:02:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 8205 "" "Go-http-client/1.1"
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [INF] : from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev cef351cb-e8f6-473b-bb51-15817bfed718 does not exist
Dec  5 02:38:29 compute-0 ceph-mgr[193209]: [progress WARNING root] complete: ev 6fa266ab-8dc2-433d-8fc8-fe8a76628d74 does not exist
Dec  5 02:38:29 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: session ls {prefix=session ls} (starting...)
Dec  5 02:38:29 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Dec  5 02:38:29 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659965562' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Dec  5 02:38:30 compute-0 ceph-mds[220561]: mds.cephfs.compute-0.ksxtqc asok_command: status {prefix=status} (starting...)
Dec  5 02:38:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  5 02:38:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576470875' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  5 02:38:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15905 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:30 compute-0 ceph-mon[192914]: from='mgr.14130 192.168.122.100:0/3085287714' entity='mgr.compute-0.afshmv' 
Dec  5 02:38:30 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  5 02:38:30 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1086202521' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  5 02:38:30 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15909 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  5 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1805997846' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  5 02:38:31 compute-0 nova_compute[349548]: 2025-12-05 02:38:31.118 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Dec  5 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3674282684' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Dec  5 02:38:31 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: ERROR   02:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  5 02:38:31 compute-0 openstack_network_exporter[366555]: 
Dec  5 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3768399922' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Dec  5 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465384984' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Dec  5 02:38:31 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Dec  5 02:38:31 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729641969' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  5 02:38:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15921 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:32 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:32.126+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  5 02:38:32 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec  5 02:38:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  5 02:38:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482079521' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  5 02:38:32 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Dec  5 02:38:32 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586608825' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Dec  5 02:38:32 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15927 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Dec  5 02:38:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475494273' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Dec  5 02:38:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15931 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:33 compute-0 nova_compute[349548]: 2025-12-05 02:38:33.236 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  5 02:38:33 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Dec  5 02:38:33 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2162378691' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Dec  5 02:38:33 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15935 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:33 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9171000/0x0/0x4ffc00000, data 0x28430e1/0x290c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1335592 data_alloc: 218103808 data_used: 16302080
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c43a4cf400 session 0x55c43a54e780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c4397da1e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 173.469711304s of 174.075759888s, submitted: 52
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c43986dc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107282432 unmapped: 30728192 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b3000 session 0x55c4399641e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103825408 unmapped: 34185216 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.715733528s of 10.014011383s, submitted: 44
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103849984 unmapped: 34160640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103874560 unmapped: 34136064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9d87000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x417f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210818 data_alloc: 218103808 data_used: 11628544
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x1c300ae/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 103890944 unmapped: 34119680 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2c00 session 0x55c43990bc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c437982c00 session 0x55c4398c2d20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4399f0c00 session 0x55c4398c2780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.585576057s of 11.133249283s, submitted: 76
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100327424 unmapped: 37683200 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2000 session 0x55c4373285a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1006755 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100335616 unmapped: 37675008 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 heartbeat osd_stat(store_statfs(0x4face4000/0x0/0x4ffc00000, data 0x8c409e/0x98a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 80.609878540s of 80.641670227s, submitted: 13
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100786176 unmapped: 37224448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 ms_handle_reset con 0x55c4398b2800 session 0x55c439c743c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1061603 data_alloc: 218103808 data_used: 4386816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100802560 unmapped: 37208064 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fa4e0000/0x0/0x4ffc00000, data 0x10c5c1b/0x118d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b3000 session 0x55c4398a9860
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125416 data_alloc: 218103808 data_used: 4403200
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 heartbeat osd_stat(store_statfs(0x4f9cdd000/0x0/0x4ffc00000, data 0x18c7798/0x1990000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1125576 data_alloc: 218103808 data_used: 4407296
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c437982c00 session 0x55c4373b0780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2000 session 0x55c43911be00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 ms_handle_reset con 0x55c4398b2800 session 0x55c43803e780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 100827136 unmapped: 37183488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 59.751800537s of 59.914466858s, submitted: 18
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4cf400 session 0x55c43911a1e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399f0c00 session 0x55c4378c32c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 30171136 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 heartbeat osd_stat(store_statfs(0x4f9cd8000/0x0/0x4ffc00000, data 0x18c935b/0x1995000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439165e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c436fc4d20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 30654464 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c439164960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c437aa1c20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c43914a1e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210372 data_alloc: 218103808 data_used: 11231232
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c439859860
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398a0800 session 0x55c43914a000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c43a4ce800 session 0x55c437319c20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4398a9a40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c43914bc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4399ee800 session 0x55c4373312c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107626496 unmapped: 30384128 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c438cd6800 session 0x55c437330000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c437982c00 session 0x55c4399cd4a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 ms_handle_reset con 0x55c4398b2000 session 0x55c4399cde00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 135 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107667456 unmapped: 30343168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f90ad000/0x0/0x4ffc00000, data 0x24f1f7b/0x25c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4397dab40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107675648 unmapped: 30334976 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107954176 unmapped: 30056448 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4399643c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c43a4ce800 session 0x55c4398c3680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c437982c00 session 0x55c4398665a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2000 session 0x55c43980cd20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b2800 session 0x55c4398d4780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1304547 data_alloc: 218103808 data_used: 11243520
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 heartbeat osd_stat(store_statfs(0x4f8b0c000/0x0/0x4ffc00000, data 0x2a94f58/0x2b62000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4399ee800 session 0x55c4398d45a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107995136 unmapped: 30015488 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 ms_handle_reset con 0x55c4398b3800 session 0x55c437329860
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107569152 unmapped: 30441472 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107175936 unmapped: 30834688 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370618 data_alloc: 234881024 data_used: 19755008
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.508833885s of 12.396708488s, submitted: 129
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109142016 unmapped: 28868608 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8ade000/0x0/0x4ffc00000, data 0x2ac09bb/0x2b8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110321664 unmapped: 27688960 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43965e3c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43802a000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1387780 data_alloc: 234881024 data_used: 21659648
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c437aa1e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 108232704 unmapped: 29777920 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c4399cd0e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107601920 unmapped: 30408704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1302492 data_alloc: 234881024 data_used: 17076224
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 107610112 unmapped: 30400512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91d8000/0x0/0x4ffc00000, data 0x23c79bb/0x2496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109707264 unmapped: 28303360 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332732 data_alloc: 234881024 data_used: 21368832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4399cc5a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4399cd2c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399cc000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43802ba40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 109715456 unmapped: 28295168 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.453773499s of 26.580440521s, submitted: 32
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43986d860
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c43965fc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c4373b70e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c43914bc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c4398a92c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110616576 unmapped: 27394048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1389229 data_alloc: 234881024 data_used: 21372928
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f8b33000/0x0/0x4ffc00000, data 0x2a6ba1d/0x2b3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110649344 unmapped: 27361280 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 110665728 unmapped: 27344896 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 20815872 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c43980c3c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115515392 unmapped: 22495232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8f000/0x0/0x4ffc00000, data 0x370ea1d/0x37de000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115564544 unmapped: 22446080 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503030 data_alloc: 234881024 data_used: 22388736
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 22896640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 20946944 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 117784576 unmapped: 20226048 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 17096704 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399efc00 session 0x55c4398590e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.942190170s of 12.467167854s, submitted: 119
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c4399cc1e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118046720 unmapped: 19963904 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451822 data_alloc: 234881024 data_used: 22401024
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43967b4a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7e8a000/0x0/0x4ffc00000, data 0x3714a1d/0x37e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118063104 unmapped: 19947520 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f852e000/0x0/0x4ffc00000, data 0x30709bb/0x313f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120578048 unmapped: 17432576 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120602624 unmapped: 17408000 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503456 data_alloc: 234881024 data_used: 22880256
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.669157028s of 11.117276192s, submitted: 70
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120610816 unmapped: 17399808 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120619008 unmapped: 17391616 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503472 data_alloc: 234881024 data_used: 22880256
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120627200 unmapped: 17383424 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503792 data_alloc: 234881024 data_used: 22888448
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7fb3000/0x0/0x4ffc00000, data 0x35e49bb/0x36b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 120635392 unmapped: 17375232 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.523187637s of 29.533178329s, submitted: 1
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c439e163c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c439100f00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3c00 session 0x55c439c74960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c438edbc00 session 0x55c43990b2c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c43989d400 session 0x55c4399ccf00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118743040 unmapped: 19267584 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0b40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c4373183c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2400 session 0x55c43965e000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1533172 data_alloc: 234881024 data_used: 22888448
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c437c10960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b3000 session 0x55c43965ef00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f7c4e000/0x0/0x4ffc00000, data 0x39519bb/0x3a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 118751232 unmapped: 19259392 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2000 session 0x55c43717eb40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115195904 unmapped: 22814720 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1352478 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.346131325s of 12.543250084s, submitted: 31
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa0960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f89f5000/0x0/0x4ffc00000, data 0x2750949/0x281d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115204096 unmapped: 22806528 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355411 data_alloc: 234881024 data_used: 17719296
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef800 session 0x55c437aa14a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c4399ef400 session 0x55c439101e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 115335168 unmapped: 22675456 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4de00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e396c/0x24b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324103 data_alloc: 234881024 data_used: 17600512
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114089984 unmapped: 23920640 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.978549957s of 29.158163071s, submitted: 17
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91bd000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91be000/0x0/0x4ffc00000, data 0x23e3949/0x24b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324439 data_alloc: 234881024 data_used: 17666048
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 114106368 unmapped: 23904256 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b9000/0x0/0x4ffc00000, data 0x23e8949/0x24b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327689 data_alloc: 234881024 data_used: 17661952
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113999872 unmapped: 24010752 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.963225365s of 13.028007507s, submitted: 11
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f91b8000/0x0/0x4ffc00000, data 0x23e9949/0x24b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326709 data_alloc: 234881024 data_used: 17661952
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 24256512 heap: 138010624 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 32628736 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 heartbeat osd_stat(store_statfs(0x4f81b7000/0x0/0x4ffc00000, data 0x33e9959/0x34b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d47000/0x0/0x4ffc00000, data 0x3859959/0x3927000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4c000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1474162 data_alloc: 234881024 data_used: 17670144
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 32620544 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473458 data_alloc: 234881024 data_used: 17670144
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 32612352 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.567854881s of 15.844666481s, submitted: 24
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437380960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 121815040 unmapped: 24592384 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437aa0d20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1525586 data_alloc: 251658240 data_used: 37007360
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 132841472 unmapped: 13565952 heap: 146407424 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7d44000/0x0/0x4ffc00000, data 0x385b4d6/0x392a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439838000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c4398a90e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43806cd20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c437aa05a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54e5a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c43a54ef00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43990b4a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7715000/0x0/0x4ffc00000, data 0x3e8a4d6/0x3f59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439101c20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4373314a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443055 data_alloc: 234881024 data_used: 30597120
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1443187 data_alloc: 234881024 data_used: 30597120
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129818624 unmapped: 20267008 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ef400 session 0x55c439101a40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439100f00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398a92c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438015680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 129777664 unmapped: 20307968 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.293066978s of 13.490474701s, submitted: 33
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e01000/0x0/0x4ffc00000, data 0x336d4d6/0x343c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b0b40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee000 session 0x55c4373310e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437330d20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aae3c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c437380960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1507941 data_alloc: 251658240 data_used: 35082240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e91000/0x0/0x4ffc00000, data 0x370d4e6/0x37dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130064384 unmapped: 20021248 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437381e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1524263 data_alloc: 251658240 data_used: 36999168
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130367488 unmapped: 19718144 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 130400256 unmapped: 19685376 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131702784 unmapped: 18382848 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131710976 unmapped: 18374656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7e6d000/0x0/0x4ffc00000, data 0x37314e6/0x3801000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 18366464 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550983 data_alloc: 251658240 data_used: 40714240
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.743537903s of 32.836509705s, submitted: 6
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 137379840 unmapped: 12705792 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140926976 unmapped: 9158656 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 9125888 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7455000/0x0/0x4ffc00000, data 0x41434e6/0x4213000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1631561 data_alloc: 251658240 data_used: 41697280
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 10330112 heap: 150085632 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141393920 unmapped: 10166272 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6e8a000/0x0/0x4ffc00000, data 0x470c4e6/0x47dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141541376 unmapped: 10018816 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697183 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141549568 unmapped: 10010624 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.211829185s of 13.867918968s, submitted: 144
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cec000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141557760 unmapped: 10002432 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1697199 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1692287 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140877824 unmapped: 10682368 heap: 151560192 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437319680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c437aaeb40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c439164960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4371734a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.983115196s of 13.000985146s, submitted: 2
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c439964b40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b63c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c4373b01e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 25296896 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43967b0e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6cf4000/0x0/0x4ffc00000, data 0x48aa4e6/0x497a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4398c2f00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 25288704 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43ac4d680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c438014d20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43806da40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c437aa0960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437aa0b40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c43912a5a0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b3000 session 0x55c437bdfc20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140345344 unmapped: 25911296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398a9a40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439101e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777183 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628b000/0x0/0x4ffc00000, data 0x53124f6/0x53e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1777359 data_alloc: 251658240 data_used: 41992192
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140328960 unmapped: 25927680 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439892c00 session 0x55c4373303c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.160284996s of 10.497438431s, submitted: 42
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 25935872 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5800 session 0x55c43a54e3c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 25706496 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807465 data_alloc: 251658240 data_used: 45797376
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 146636800 unmapped: 19619840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 17293312 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149340160 unmapped: 16916480 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149561344 unmapped: 16695296 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149569536 unmapped: 16687104 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 16654336 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149635072 unmapped: 16621568 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 16613376 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1857705 data_alloc: 268435456 data_used: 52809728
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149684224 unmapped: 16572416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.395227432s of 26.412460327s, submitted: 3
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 16465920 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149798912 unmapped: 16457728 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 16449536 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1858713 data_alloc: 268435456 data_used: 52813824
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149815296 unmapped: 16441344 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f628a000/0x0/0x4ffc00000, data 0x5312519/0x53e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 16433152 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 151429120 unmapped: 14827520 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.783483505s of 10.946872711s, submitted: 24
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 14172160 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889331 data_alloc: 268435456 data_used: 52891648
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152961024 unmapped: 13295616 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5df9000/0x0/0x4ffc00000, data 0x57a3519/0x5875000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153083904 unmapped: 13172736 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 12902400 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5de9000/0x0/0x4ffc00000, data 0x57b3519/0x5885000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1911755 data_alloc: 268435456 data_used: 53923840
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 12869632 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2000 session 0x55c43ac4d2c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398583c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 13500416 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4398c32c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1743569 data_alloc: 251658240 data_used: 43810816
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f69de000/0x0/0x4ffc00000, data 0x4bbf4e6/0x4c8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c43a54ef00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c43802b0e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 18595840 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.789140701s of 12.240109444s, submitted: 93
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c437c112c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625360 data_alloc: 251658240 data_used: 39796736
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.772724152s of 19.821142197s, submitted: 11
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1625872 data_alloc: 251658240 data_used: 39800832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437aaef00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c4398a8000
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 21626880 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c43ac4c780
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f74fb000/0x0/0x4ffc00000, data 0x40a44d6/0x4173000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Dec  5 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315993858' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 9064 writes, 35K keys, 9064 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 9064 writes, 2319 syncs, 3.91 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1653 writes, 6154 keys, 1653 commit groups, 1.0 writes per commit group, ingest: 6.39 MB, 0.01 MB/s
Interval WAL: 1653 writes, 687 syncs, 2.41 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143777792 unmapped: 22478848 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143785984 unmapped: 22470656 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: mgrc ms_handle_reset ms_handle_reset con 0x55c437983800
Dec  5 02:38:34 compute-0 ceph-osd[208828]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec  5 02:38:34 compute-0 ceph-osd[208828]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: mgrc handle_mgr_configure stats_period=5
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143917056 unmapped: 22339584 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1589536 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f7810000/0x0/0x4ffc00000, data 0x3d8f4d6/0x3e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143925248 unmapped: 22331392 heap: 166256640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 100.267555237s of 100.389305115s, submitted: 23
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c43ac4de00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5400 session 0x55c4398a90e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c437982c00 session 0x55c4373292c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439e17680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398a5c00 session 0x55c437319e00
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1681860 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143663104 unmapped: 26796032 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d26000/0x0/0x4ffc00000, data 0x48794d6/0x4948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143671296 unmapped: 26787840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c439c88c00 session 0x55c43914a3c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1684438 data_alloc: 251658240 data_used: 37978112
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143695872 unmapped: 26763264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143704064 unmapped: 26755072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 143941632 unmapped: 26517504 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145530880 unmapped: 24928256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee400 session 0x55c439867c20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1765558 data_alloc: 251658240 data_used: 49324032
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 24838144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.556104660s of 31.715848923s, submitted: 28
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 24829952 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [0,0,0,1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 24788992 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145735680 unmapped: 24723456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1766950 data_alloc: 251658240 data_used: 49364992
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6d25000/0x0/0x4ffc00000, data 0x48794f9/0x4949000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 24666112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.810025215s of 12.461395264s, submitted: 108
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791816 data_alloc: 251658240 data_used: 49373184
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159358976 unmapped: 11100160 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5eab000/0x0/0x4ffc00000, data 0x56ed4f9/0x57bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159612928 unmapped: 10846208 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159825920 unmapped: 10633216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159858688 unmapped: 10600448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892200 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159866880 unmapped: 10592256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e75000/0x0/0x4ffc00000, data 0x57214f9/0x57f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.857128143s of 12.298008919s, submitted: 144
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159899648 unmapped: 10559488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159907840 unmapped: 10551296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159940608 unmapped: 10518528 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159948800 unmapped: 10510336 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159956992 unmapped: 10502144 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e74000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892412 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159989760 unmapped: 10469376 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.230848312s of 21.238258362s, submitted: 1
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1885948 data_alloc: 268435456 data_used: 50962432
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.680427551s of 17.691595078s, submitted: 2
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888652 data_alloc: 268435456 data_used: 51240960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1888476 data_alloc: 268435456 data_used: 51240960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159309824 unmapped: 11149312 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.009906769s of 13.027759552s, submitted: 2
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889356 data_alloc: 268435456 data_used: 51240960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 11141120 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158703616 unmapped: 11755520 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 11747328 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1889516 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 11739136 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.845460892s of 22.864625931s, submitted: 4
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158785536 unmapped: 11673600 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 11665408 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158801920 unmapped: 11657216 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158810112 unmapped: 11649024 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 11640832 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 11632640 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158834688 unmapped: 11624448 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158842880 unmapped: 11616256 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158851072 unmapped: 11608064 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 11599872 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158867456 unmapped: 11591680 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 268435456 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158875648 unmapped: 11583488 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158883840 unmapped: 11575296 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158892032 unmapped: 11567104 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 158900224 unmapped: 11558912 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1891276 data_alloc: 251658240 data_used: 51245056
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.814193726s of 215.831085205s, submitted: 14
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159014912 unmapped: 11444224 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1894956 data_alloc: 251658240 data_used: 51838976
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7c000/0x0/0x4ffc00000, data 0x57224f9/0x57f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159031296 unmapped: 11427840 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892500 data_alloc: 251658240 data_used: 51838976
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892660 data_alloc: 251658240 data_used: 51843072
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e7a000/0x0/0x4ffc00000, data 0x57244f9/0x57f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.156604767s of 22.175872803s, submitted: 2
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159047680 unmapped: 11411456 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159055872 unmapped: 11403264 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1892904 data_alloc: 251658240 data_used: 51843072
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159064064 unmapped: 11395072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.559044838s of 13.568158150s, submitted: 1
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895352 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.334384918s of 15.360384941s, submitted: 14
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 11362304 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159105024 unmapped: 11354112 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159113216 unmapped: 11345920 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159129600 unmapped: 11329536 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159137792 unmapped: 11321344 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159145984 unmapped: 11313152 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 11304960 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 251658240 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 11296768 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 11288576 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159178752 unmapped: 11280384 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 11272192 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 11264000 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159203328 unmapped: 11255808 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 11247616 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159219712 unmapped: 11239424 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159227904 unmapped: 11231232 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: ** DB Stats **
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Uptime(secs): 4200.1 total, 600.0 interval
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Cumulative writes: 9570 writes, 37K keys, 9570 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Cumulative WAL: 9570 writes, 2504 syncs, 3.82 writes per sync, written: 0.03 GB, 0.01 MB/s
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Interval writes: 506 writes, 1776 keys, 506 commit groups, 1.0 writes per commit group, ingest: 2.56 MB, 0.00 MB/s
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Interval WAL: 506 writes, 185 syncs, 2.74 writes per sync, written: 0.00 GB, 0.00 MB/s
Dec  5 02:38:34 compute-0 ceph-osd[208828]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159236096 unmapped: 11223040 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159244288 unmapped: 11214848 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 11206656 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159260672 unmapped: 11198464 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159268864 unmapped: 11190272 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159277056 unmapped: 11182080 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1895528 data_alloc: 234881024 data_used: 51830784
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 11173888 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 196.116027832s of 196.124725342s, submitted: 1
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c43989d400 session 0x55c43965fa40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4398b2800 session 0x55c4398c23c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f5e79000/0x0/0x4ffc00000, data 0x57254f9/0x57f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c439c75c20
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1723092 data_alloc: 234881024 data_used: 44167168
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f6eca000/0x0/0x4ffc00000, data 0x46d44f9/0x47a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399ee800 session 0x55c4373183c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.321928978s of 13.440460205s, submitted: 22
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c4399eec00 session 0x55c438015680
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 153534464 unmapped: 16924672 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1722960 data_alloc: 234881024 data_used: 44167168
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 ms_handle_reset con 0x55c438edbc00 session 0x55c4380143c0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 heartbeat osd_stat(store_statfs(0x4f8860000/0x0/0x4ffc00000, data 0x2d3e4d6/0x2e0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 27779072 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1438434 data_alloc: 218103808 data_used: 30633984
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.228912354s of 11.431051254s, submitted: 36
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 142688256 unmapped: 27770880 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 45506560 heap: 170459136 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 ms_handle_reset con 0x55c43989d400 session 0x55c439867860
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 141778944 unmapped: 45465600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 heartbeat osd_stat(store_statfs(0x4f9cce000/0x0/0x4ffc00000, data 0x18d00a7/0x19a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 140 ms_handle_reset con 0x55c4398b2800 session 0x55c4373310e0
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 62226432 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262617 data_alloc: 218103808 data_used: 11313152
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 ms_handle_reset con 0x55c4399ee800 session 0x55c437c10960
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1259196 data_alloc: 218103808 data_used: 11313152
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cc9000/0x0/0x4ffc00000, data 0x18d3811/0x19a5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.351562500s of 11.958790779s, submitted: 94
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc5000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125181952 unmapped: 62062592 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125206528 unmapped: 62038016 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125296640 unmapped: 61947904 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125313024 unmapped: 61931520 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125321216 unmapped: 61923328 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 61841408 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125493248 unmapped: 61751296 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125394944 unmapped: 61849600 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf dump' '{prefix=perf dump}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125329408 unmapped: 61915136 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf schema' '{prefix=perf schema}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 ms_handle_reset con 0x55c437982c00 session 0x55c437bdfa40
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124919808 unmapped: 62324736 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2729 syncs, 3.68 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 481 writes, 1444 keys, 481 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 481 writes, 225 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124928000 unmapped: 62316544 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124936192 unmapped: 62308352 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 601.736877441s of 602.376464844s, submitted: 104
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124952576 unmapped: 62291968 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f9cc6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,1] op hist [1])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124993536 unmapped: 62251008 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125059072 unmapped: 62185472 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125067264 unmapped: 62177280 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 62160896 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 62152704 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125100032 unmapped: 62144512 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125108224 unmapped: 62136320 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125116416 unmapped: 62128128 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 62119936 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:34 compute-0 ceph-osd[208828]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:34 compute-0 ceph-osd[208828]: bluestore.MempoolThread(0x55c43583db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1262810 data_alloc: 218103808 data_used: 11329536
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}'
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 61980672 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 61980672 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: osd.2 142 heartbeat osd_stat(store_statfs(0x4f98b6000/0x0/0x4ffc00000, data 0x18d5294/0x19a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
Dec  5 02:38:34 compute-0 ceph-osd[208828]: prioritycache tune_memory target: 4294967296 mapped: 124829696 unmapped: 62414848 heap: 187244544 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:34 compute-0 ceph-osd[208828]: do_command 'log dump' '{prefix=log dump}'
Dec  5 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Dec  5 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/799941036' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Dec  5 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15943 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:34 compute-0 rsyslogd[188644]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  5 02:38:34 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Dec  5 02:38:34 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3859075767' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  5 02:38:34 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15947 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:35 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15951 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec  5 02:38:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Dec  5 02:38:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120978798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Dec  5 02:38:35 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15953 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:38:35 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Dec  5 02:38:35 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016016635' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Dec  5 02:38:36 compute-0 nova_compute[349548]: 2025-12-05 02:38:36.120 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:36 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15957 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:38:36 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Dec  5 02:38:36 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3547954620' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Dec  5 02:38:37 compute-0 ceph-mgr[193209]: log_channel(audit) log [DBG] : from='client.15965 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec  5 02:38:37 compute-0 ceph-mgr[193209]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:38:37 compute-0 ceph-cbd280d3-cbd8-528b-ace6-2b3a887cdcee-mgr-compute-0-afshmv[193193]: 2025-12-05T02:38:37.020+0000 7f1b09f03640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec  5 02:38:37 compute-0 ceph-mgr[193209]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 57 MiB data, 283 MiB used, 60 GiB / 60 GiB avail
Dec  5 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Dec  5 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215335810' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Dec  5 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Dec  5 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514639259' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Dec  5 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Dec  5 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1296937051' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Dec  5 02:38:37 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Dec  5 02:38:37 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173020999' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Dec  5 02:38:38 compute-0 nova_compute[349548]: 2025-12-05 02:38:38.238 349552 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  5 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Dec  5 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738194113' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.332 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.333 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.333 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd61438050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7efd5e6c73b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c70e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.334 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd60ac80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c78f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c71d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7230>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5d564290>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c72f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7350>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7b60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7bc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c63f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.335 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7c20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7cb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7d40>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7dd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c75f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7650>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7e60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e624e90>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7ef0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.336 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7efd5e6c7f80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7efd5d082de0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7efd5e6c5730>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7efd5e6c7380>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7efd5e6c7b00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7efd5e6c7020>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.337 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7efd5e6c7140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7efd5e6c71a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7efd5e6c7200>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7efd5e6c7260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7efd5d564260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7efd5e6c72c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.338 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7efd5e6c7320>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7efd5e6c7b30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7efd5e6c7b90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7efd5e6c56d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7efd5e6c7bf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7efd5e6c7c80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7efd5e6c7d10>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7efd5e6c7da0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7efd5e6c75c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7efd60ef9a30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7efd5e6c7e30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7efd5e6c7620>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.340 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7efd5e6c56a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7efd5e6c7ec0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7efd5e6c7f50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7efd5e67ba40>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceilometer_agent_compute[361302]: 2025-12-05 02:38:38.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  5 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Dec  5 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614369423' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Dec  5 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Dec  5 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405470282' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Dec  5 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec  5 02:38:38 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Dec  5 02:38:38 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1005288873' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108560384 unmapped: 42090496 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.749129295s of 10.003149033s, submitted: 39
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108658688 unmapped: 41992192 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 41984000 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1149030 data_alloc: 218103808 data_used: 11771904
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108724224 unmapped: 41926656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.765543938s of 10.422777176s, submitted: 87
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484ac72c00 session 0x56484af661e0
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484887c000 session 0x56484af674a0
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108707840 unmapped: 41943040 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1054182 data_alloc: 218103808 data_used: 7081984
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4fa4f2000/0x0/0x4ffc00000, data 0x10b279c/0x117c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 ms_handle_reset con 0x56484abb5000 session 0x56484aeee3c0
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1049505 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 heartbeat osd_stat(store_statfs(0x4face6000/0x0/0x4ffc00000, data 0x8bf6a5/0x985000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104669184 unmapped: 45981696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 81.139320374s of 81.423446655s, submitted: 52
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051319 data_alloc: 218103808 data_used: 7057408
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104734720 unmapped: 45916160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104742912 unmapped: 45907968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 heartbeat osd_stat(store_statfs(0x4fa072000/0x0/0x4ffc00000, data 0x1531265/0x15fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104751104 unmapped: 45899776 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484aafe1e0
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:38 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:38 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:38 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 heartbeat osd_stat(store_statfs(0x4f9bfe000/0x0/0x4ffc00000, data 0x19a2e05/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179977 data_alloc: 218103808 data_used: 7073792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104792064 unmapped: 45858816 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x5648493263c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484ac72000 session 0x564847f9da40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x564847ff7000 session 0x56484a5e8f00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 104808448 unmapped: 45842432 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484804a000 session 0x56484a5e9860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 59.769268036s of 59.916522980s, submitted: 11
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 42229760 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 ms_handle_reset con 0x56484887c000 session 0x564849e343c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564848a7a5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198393 data_alloc: 218103808 data_used: 11743232
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 heartbeat osd_stat(store_statfs(0x4f9bf9000/0x0/0x4ffc00000, data 0x19a4d92/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 42246144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848ec7c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72400 session 0x564848e26b40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a5e0d20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeed680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484abb5000 session 0x564847ceed20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x56484a845680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 108691456 unmapped: 41959424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484887c000 session 0x56484aeefa40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484aeee5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484ac72c00 session 0x56484a842960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564848d3bc00 session 0x56484a5ec5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x564847ff7000 session 0x564848005e00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 ms_handle_reset con 0x56484804a000 session 0x56484a78fa40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109887488 unmapped: 40763392 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x5648493261e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8dc1000/0x0/0x4ffc00000, data 0x27d99c5/0x28ab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x564847d114a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72c00 session 0x564847d11c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x56484ab0e1e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 109756416 unmapped: 40894464 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0ef00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1370612 data_alloc: 218103808 data_used: 11743232
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x56484a845860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76000 session 0x56484a5e0d20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564848e26b40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484a9fe000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x5648474ef4a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484af76c00 session 0x5648474ee5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484aa26400 session 0x564847d10960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110247936 unmapped: 40402944 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564847ff7000 session 0x564847d10f00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484887c000 session 0x56484ab0e3c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484804a000 session 0x56484ab0eb40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110641152 unmapped: 40009728 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f883f000/0x0/0x4ffc00000, data 0x2d5c5d5/0x2e2e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x564849db2000 session 0x56484a845a40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 ms_handle_reset con 0x56484ac72400 session 0x56484a9fe5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 40026112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 heartbeat osd_stat(store_statfs(0x4f8813000/0x0/0x4ffc00000, data 0x2d86618/0x2e5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1378599 data_alloc: 218103808 data_used: 11751424
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 40017920 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.355422020s of 12.096014023s, submitted: 115
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 38879232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 115818496 unmapped: 34832384 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484aa50780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x2d8807b/0x2e5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484813c000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1485893 data_alloc: 234881024 data_used: 26185728
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484ab0f0e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484aeee1e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848d3a400 session 0x564849327c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 32759808 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564847f98b40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x56484a8450e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f87e6000/0x0/0x4ffc00000, data 0x2db208e/0x2e88000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 33431552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1451549 data_alloc: 234881024 data_used: 24010752
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 33423360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118177792 unmapped: 32473088 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 31965184 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 30597120 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f88ae000/0x0/0x4ffc00000, data 0x2b5fff9/0x2c33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484509 data_alloc: 234881024 data_used: 28577792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120094720 unmapped: 30556160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 120102912 unmapped: 30547968 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.486093521s of 26.861923218s, submitted: 69
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x564849c06960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484813d860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76400 session 0x56484aa512c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564848ec74a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564848e20000 session 0x56484a8421e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1550538 data_alloc: 234881024 data_used: 28581888
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8280000/0x0/0x4ffc00000, data 0x331aff9/0x33ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121430016 unmapped: 29220864 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127156224 unmapped: 23494656 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 128073728 unmapped: 22577152 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849bfac00 session 0x5648481c5c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1620314 data_alloc: 234881024 data_used: 29696000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 125485056 unmapped: 25165824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126754816 unmapped: 23896064 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f78df000/0x0/0x4ffc00000, data 0x3cb3ff9/0x3d87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 127426560 unmapped: 23224320 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 19136512 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1686404 data_alloc: 251658240 data_used: 36605952
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.745903969s of 12.423884392s, submitted: 151
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af77000 session 0x56484ab36960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477c00 session 0x56484a5ec1e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133488640 unmapped: 17162240 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x564849e34f00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 130637824 unmapped: 20013056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f80a7000/0x0/0x4ffc00000, data 0x34f3ff9/0x35c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 134373376 unmapped: 16277504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132988928 unmapped: 17661952 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1658098 data_alloc: 234881024 data_used: 30609408
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 17498112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75c4000/0x0/0x4ffc00000, data 0x3fd5ff9/0x40a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,0,3])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133259264 unmapped: 17391616 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1665698 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133267456 unmapped: 17383424 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75bc000/0x0/0x4ffc00000, data 0x3fddff9/0x40b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 133275648 unmapped: 17375232 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.259255409s of 13.825000763s, submitted: 121
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132579328 unmapped: 18071552 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132587520 unmapped: 18063360 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661566 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 18055168 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.511526108s of 12.544201851s, submitted: 4
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1662094 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 18046976 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75ba000/0x0/0x4ffc00000, data 0x3fe0ff9/0x40b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1661650 data_alloc: 234881024 data_used: 31031296
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132620288 unmapped: 18030592 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132628480 unmapped: 18022400 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.172998428s of 10.241044044s, submitted: 10
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f75b5000/0x0/0x4ffc00000, data 0x3fe5ff9/0x40b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1664114 data_alloc: 234881024 data_used: 31019008
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648481472c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564848021e00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484aeed860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeecb40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 17997824 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x56484aeede00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484887c000 session 0x56484a9b8000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476400 session 0x56484a9b9860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847cef860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477800 session 0x5648488f0780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132964352 unmapped: 17686528 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x56484af3d680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x56484a9b85a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1719347 data_alloc: 234881024 data_used: 31019008
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f6e82000/0x0/0x4ffc00000, data 0x4718009/0x47ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.484535217s of 10.727886200s, submitted: 38
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132980736 unmapped: 17670144 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x564847f9d680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x564848143680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564847f9cb40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1468530 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123420672 unmapped: 27230208 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484a91c000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484804a000 session 0x56484a91d2c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 27222016 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f842a000/0x0/0x4ffc00000, data 0x3170ff9/0x3244000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849476800 session 0x564848143c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849db2000 session 0x5648481434a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1473527 data_alloc: 234881024 data_used: 17731584
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 27746304 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.922307968s of 10.159023285s, submitted: 48
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x56484af76c00 session 0x5648492f1860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564849477400 session 0x564847f983c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122912768 unmapped: 27738112 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8406000/0x0/0x4ffc00000, data 0x3194ff9/0x3268000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [0,0,1])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 ms_handle_reset con 0x564847ff7000 session 0x56484aeec960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1417997 data_alloc: 234881024 data_used: 17723392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121053184 unmapped: 29597696 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 29589504 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.571750641s of 24.842250824s, submitted: 46
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121118720 unmapped: 29532160 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121217024 unmapped: 29433856 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121421824 unmapped: 29229056 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1435561 data_alloc: 234881024 data_used: 19324928
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 heartbeat osd_stat(store_statfs(0x4f8b5d000/0x0/0x4ffc00000, data 0x2a3efe9/0x2b11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1436041 data_alloc: 234881024 data_used: 19337216
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.756362915s of 18.791091919s, submitted: 4
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484d2ca000 session 0x56484a845680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x56484aa51860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1439863 data_alloc: 234881024 data_used: 19345408
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x56484aeee5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff7000 session 0x56484af2c780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8b59000/0x0/0x4ffc00000, data 0x2a40b66/0x2b14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848e20800 session 0x564848e265a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.284442902s of 10.303551674s, submitted: 2
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847cf8c00 session 0x564849e341e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 121069568 unmapped: 29581312 heap: 150650880 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847fcc400 session 0x56484a8443c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484887c000 session 0x56484ab37a40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab370e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a5ec3c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122503168 unmapped: 33398784 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564847d114a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69c00 session 0x564847cef860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69400 session 0x56484a9fef00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356384 data_alloc: 218103808 data_used: 11759616
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9fe780
Dec  5 02:38:39 compute-0 ceph-mon[192914]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Dec  5 02:38:39 compute-0 ceph-mon[192914]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355060447' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 37429248 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a9fe000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x564848e272c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f85f1000/0x0/0x4ffc00000, data 0x26d9b95/0x27ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1364170 data_alloc: 218103808 data_used: 11759616
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118833152 unmapped: 37068800 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8e96000/0x0/0x4ffc00000, data 0x2703bc8/0x27d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 37060608 heap: 155901952 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848ec7860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70800 session 0x56484a842960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ab0f860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484813dc20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.737582207s of 13.344229698s, submitted: 87
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac421e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484ab36f00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 119644160 unmapped: 39936000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x56484a78ef00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a9b8780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484a8434a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1535513 data_alloc: 234881024 data_used: 20418560
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 36896768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564847d11860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab36b40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124076032 unmapped: 35504128 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1571513 data_alloc: 234881024 data_used: 25505792
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68c00 session 0x56484aa50b40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aa51860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124084224 unmapped: 35495936 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 124116992 unmapped: 35463168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 126574592 unmapped: 33005568 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 132308992 unmapped: 27271168 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1663144 data_alloc: 251658240 data_used: 37560320
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135913472 unmapped: 23666688 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 23625728 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 23617536 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 23584768 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f8064000/0x0/0x4ffc00000, data 0x3535bc8/0x360a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1677864 data_alloc: 251658240 data_used: 38080512
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136003584 unmapped: 23576576 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 136028160 unmapped: 23552000 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.552471161s of 32.833507538s, submitted: 33
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 20283392 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139304960 unmapped: 20275200 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7387000/0x0/0x4ffc00000, data 0x4212bc8/0x42e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1791954 data_alloc: 251658240 data_used: 38576128
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140591104 unmapped: 18989056 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143826944 unmapped: 15753216 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7378000/0x0/0x4ffc00000, data 0x4221bc8/0x42f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x458f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 15073280 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143433728 unmapped: 16146432 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865102 data_alloc: 251658240 data_used: 38723584
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6673000/0x0/0x4ffc00000, data 0x4b16bc8/0x4beb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865262 data_alloc: 251658240 data_used: 38727680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.018227577s of 13.871615410s, submitted: 193
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143450112 unmapped: 16130048 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6672000/0x0/0x4ffc00000, data 0x4b17bc8/0x4bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1865686 data_alloc: 251658240 data_used: 38731776
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143458304 unmapped: 16121856 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1863118 data_alloc: 251658240 data_used: 38731776
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6670000/0x0/0x4ffc00000, data 0x4b19bc8/0x4bee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143466496 unmapped: 16113664 heap: 159580160 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab0fc20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484813cd20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484ab374a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848048800 session 0x56484a9fe780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.988618851s of 13.008173943s, submitted: 2
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x564847d114a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143294464 unmapped: 17899520 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a845e00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f5d46000/0x0/0x4ffc00000, data 0x5442bf1/0x5518000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143327232 unmapped: 17866752 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1934703 data_alloc: 251658240 data_used: 38731776
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 14704640 heap: 161193984 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a844780
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ab37a40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc53/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2046908 data_alloc: 251658240 data_used: 38731776
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e78000/0x0/0x4ffc00000, data 0x630fc8c/0x63e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0400 session 0x56484aa51a40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8425a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143679488 unmapped: 25387008 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 143687680 unmapped: 25378816 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484a843680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a843860
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 25059328 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.890540123s of 11.288828850s, submitted: 66
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144015360 unmapped: 25051136 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2050670 data_alloc: 251658240 data_used: 38731776
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144072704 unmapped: 24993792 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 24813568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 146710528 unmapped: 22355968 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 15196160 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 158777344 unmapped: 10289152 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2184302 data_alloc: 268435456 data_used: 56541184
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161619968 unmapped: 7446528 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161628160 unmapped: 7438336 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 7430144 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161644544 unmapped: 7421952 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2193742 data_alloc: 268435456 data_used: 57905152
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4e53000/0x0/0x4ffc00000, data 0x6333c9c/0x640b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161660928 unmapped: 7405568 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 7397376 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.762050629s of 34.795322418s, submitted: 5
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165208064 unmapped: 3858432 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2233138 data_alloc: 268435456 data_used: 58028032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a4e000/0x0/0x4ffc00000, data 0x6738c9c/0x6810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165330944 unmapped: 3735552 heap: 169066496 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165412864 unmapped: 6807552 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 165462016 unmapped: 6758400 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166813696 unmapped: 5406720 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2345136 data_alloc: 268435456 data_used: 58961920
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f3c01000/0x0/0x4ffc00000, data 0x7585c9c/0x765d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a5e01e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484a9fe000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 166846464 unmapped: 5373952 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x5648489743c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2161675 data_alloc: 251658240 data_used: 49319936
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161579008 unmapped: 10641408 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a3d000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.678235054s of 13.497112274s, submitted: 173
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476800 session 0x56484aa505a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71000 session 0x564848144d20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 161587200 unmapped: 10633216 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f4a42000/0x0/0x4ffc00000, data 0x6746c2a/0x681c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484af7c3c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1882942 data_alloc: 251658240 data_used: 36667392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f616b000/0x0/0x4ffc00000, data 0x501dc2a/0x50f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.953754425s of 21.209007263s, submitted: 50
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1883210 data_alloc: 251658240 data_used: 36667392
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 150806528 unmapped: 21413888 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1000 session 0x564847f98d20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f6169000/0x0/0x4ffc00000, data 0x501ec2a/0x50f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484ac43e00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139493376 unmapped: 32727040 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139501568 unmapped: 32718848 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.2 total, 600.0 interval#012Cumulative writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 2982 syncs, 3.80 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2397 writes, 9102 keys, 2397 commit groups, 1.0 writes per commit group, ingest: 9.64 MB, 0.02 MB/s#012Interval WAL: 2397 writes, 959 syncs, 2.50 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139509760 unmapped: 32710656 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af69000 session 0x564849e34000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139517952 unmapped: 32702464 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: mgrc ms_handle_reset ms_handle_reset con 0x56484885e000
Dec  5 02:38:39 compute-0 ceph-osd[207795]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/858078637
Dec  5 02:38:39 compute-0 ceph-osd[207795]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/858078637,v1:192.168.122.100:6801/858078637]
Dec  5 02:38:39 compute-0 ceph-osd[207795]: mgrc handle_mgr_configure stats_period=5
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564848045400 session 0x564848144960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6c00 session 0x564847cee1e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139665408 unmapped: 32555008 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139673600 unmapped: 32546816 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 32538624 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1572132 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139689984 unmapped: 32530432 heap: 172220416 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7d99000/0x0/0x4ffc00000, data 0x33f0bc8/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484ac425a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484ac423c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x5648488f1e00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac71c00 session 0x56484a842d20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.254913330s of 99.396072388s, submitted: 32
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484a8423c0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484a842f00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138616832 unmapped: 40951808 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x564848a7a5a0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x564848005c20
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f1400 session 0x56484a91d680
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1612692 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564849476400 session 0x56484aeed0e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7967000/0x0/0x4ffc00000, data 0x3820c3a/0x38f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484a7f0800 session 0x56484aeede00
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484ac70c00 session 0x56484aeec960
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x56484af68400 session 0x56484aeecb40
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138600448 unmapped: 40968192 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614530 data_alloc: 234881024 data_used: 21114880
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1614262 data_alloc: 234881024 data_used: 21233664
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 ms_handle_reset con 0x564847ff6000 session 0x5648481421e0
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1642742 data_alloc: 234881024 data_used: 25214976
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137674752 unmapped: 41893888 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 137682944 unmapped: 41885696 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.585376740s of 31.791507721s, submitted: 41
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138772480 unmapped: 40796160 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138878976 unmapped: 40689664 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138911744 unmapped: 40656896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 40632320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1643046 data_alloc: 234881024 data_used: 25227264
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7966000/0x0/0x4ffc00000, data 0x3820c4a/0x38f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 138944512 unmapped: 40624128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.734733582s of 12.462025642s, submitted: 110
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139984896 unmapped: 39583744 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f71fb000/0x0/0x4ffc00000, data 0x3f83c4a/0x405b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 139214848 unmapped: 40353792 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140197888 unmapped: 39370752 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140279808 unmapped: 39288832 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705914 data_alloc: 234881024 data_used: 25247744
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140288000 unmapped: 39280640 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7187000/0x0/0x4ffc00000, data 0x3fffc4a/0x40d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140296192 unmapped: 39272448 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.947365761s of 12.310205460s, submitted: 67
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717d000/0x0/0x4ffc00000, data 0x4009c4a/0x40e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704530 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140460032 unmapped: 39108608 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140468224 unmapped: 39100416 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140476416 unmapped: 39092224 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704398 data_alloc: 234881024 data_used: 25251840
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.973342896s of 39.005554199s, submitted: 6
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140484608 unmapped: 39084032 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704558 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f717a000/0x0/0x4ffc00000, data 0x400cc4a/0x40e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140492800 unmapped: 39075840 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704598 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7178000/0x0/0x4ffc00000, data 0x400ec4a/0x40e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.475843430s of 21.493835449s, submitted: 3
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 39067648 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 39059456 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704778 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.689928055s of 29.696563721s, submitted: 1
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140517376 unmapped: 39051264 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140525568 unmapped: 39043072 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 39034880 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 39026688 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 39018496 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140558336 unmapped: 39010304 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 39002112 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704602 data_alloc: 234881024 data_used: 25260032
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 200.378692627s of 200.388336182s, submitted: 1
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7177000/0x0/0x4ffc00000, data 0x400fc4a/0x40e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1704586 data_alloc: 234881024 data_used: 25264128
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 38993920 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705430 data_alloc: 234881024 data_used: 25264128
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1705590 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f7160000/0x0/0x4ffc00000, data 0x4026c4a/0x40fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140771328 unmapped: 38797312 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.224842072s of 22.257825851s, submitted: 4
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38625280 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140951552 unmapped: 38617088 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706010 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.544095993s of 13.567526817s, submitted: 2
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 38608896 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38600704 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140976128 unmapped: 38592512 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 38584320 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38576128 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec  5 02:38:39 compute-0 ceph-osd[207795]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec  5 02:38:39 compute-0 ceph-osd[207795]: bluestore.MempoolThread(0x5648467e9b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1706186 data_alloc: 234881024 data_used: 25268224
Dec  5 02:38:39 compute-0 ceph-osd[207795]: prioritycache tune_memory target: 4294967296 mapped: 141000704 unmapped: 38567936 heap: 179568640 old mem: 2845415832 new mem: 2845415832
Dec  5 02:38:39 compute-0 ceph-osd[207795]: osd.1 138 heartbeat osd_stat(store_statfs(0x4f714b000/0x0/0x4ffc00000, data 0x403bc4a/0x4113000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,2] op hist [])
